[v6,2/8] crypto: add driver-side scomp interface

Message ID: 1465373818-29720-3-git-send-email-giovanni.cabiddu@intel.com (mailing list archive)
State: Changes Requested
Delegated to: Herbert Xu

Commit Message

Cabiddu, Giovanni June 8, 2016, 8:16 a.m. UTC
Add a synchronous back-end (scomp) to acomp. This allows the compression
algorithms already present in the Linux kernel crypto framework (LKCF) to be
exposed easily via acomp.

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
---
 crypto/Makefile                     |    1 +
 crypto/acompress.c                  |   49 +++++++-
 crypto/scompress.c                  |  252 +++++++++++++++++++++++++++++++++++
 include/crypto/acompress.h          |   32 ++---
 include/crypto/internal/acompress.h |   15 ++
 include/crypto/internal/scompress.h |  134 +++++++++++++++++++
 include/linux/crypto.h              |    2 +
 7 files changed, 463 insertions(+), 22 deletions(-)
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/internal/scompress.h

Comments

Herbert Xu June 13, 2016, 8:56 a.m. UTC | #1
On Wed, Jun 08, 2016 at 09:16:52AM +0100, Giovanni Cabiddu wrote:
>
> +static void *scomp_map(struct scatterlist *sg, unsigned int len,
> +		       gfp_t gfp_flags)
> +{
> +	void *buf;
> +
> +	if (sg_is_last(sg))
> +		return kmap_atomic(sg_page(sg)) + sg->offset;

This doesn't work, because kmap_atomic maps a single page only,
but an SG entry can be a super-page, i.e., a set of contiguous
pages.
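
At a minimum the fast path would need a guard like this (completely
untested; scomp_map_single is just an illustrative name):

#include <linux/highmem.h>
#include <linux/scatterlist.h>

/*
 * The kmap_atomic fast path is only valid when the single SG entry
 * lies entirely within one page; a super-page entry has to take the
 * copy path instead.
 */
static void *scomp_map_single(struct scatterlist *sg)
{
	if (sg->offset + sg->length > PAGE_SIZE)
		return NULL;		/* super-page: caller must copy */

	return kmap_atomic(sg_page(sg)) + sg->offset;
}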

> +	buf = kmalloc(len, gfp_flags);

The backup path is also very unlikely to work because we'll be
hitting this with 64K sizes and this just won't work with a 4K
page size.

So up until now we've been getting around this 64K issue with vmalloc,
and then we try to conserve the precious vmalloc resource by using
per-cpu allocation.

This totally breaks down once you go to DMA, where an SG list is
required.  Unfortunately, this means that there is no easy way
to linearise the data for our software implementations.

There is no easy way out I'm afraid.  I think we'll have to bite
the bullet and refit our software algos so that they handle SG
lists.
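
Roughly, each algo would have to grow an SG-aware walk along these
lines (untested sketch; my_algo_state and my_algo_update stand in for
a streaming primitive that our algos do not have today):

#include <linux/scatterlist.h>

struct my_algo_state;
int my_algo_update(struct my_algo_state *state, const void *src, size_t len);

static int my_algo_compress_sg(struct scatterlist *sg, unsigned int nents,
			       struct my_algo_state *state)
{
	struct sg_mapping_iter miter;
	int ret = 0;

	/* Map each SG chunk in turn and feed it to the streaming core. */
	sg_miter_start(&miter, sg, nents, SG_MITER_FROM_SG | SG_MITER_ATOMIC);
	while (sg_miter_next(&miter)) {
		ret = my_algo_update(state, miter.addr, miter.length);
		if (ret)
			break;
	}
	sg_miter_stop(&miter);

	return ret;
}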

Not only will this solve the problem at hand, it'll also mean that
acomp users will never have to do vmalloc so it's a win-win.  It
also means that we won't need the scomp interface at all.

This does bring up another question of who should be allocating the
output memory.  Up until now it has been up to the user to do so.
However, if our algos can actually handle SG lists then I think it
should be fairly easy to make them do the allocation instead.  What
do you think?

Thanks,
Cabiddu, Giovanni June 22, 2016, 3:53 p.m. UTC | #2
On Mon, Jun 13, 2016 at 04:56:12PM +0800, Herbert Xu wrote:
> The backup path is also very unlikely to work because we'll be
> hitting this with 64K sizes and this just won't work with a 4K
> page size.
Is scatterwalk_map_and_copy broken?

> So up until now we've been getting around this 64K issue with vmalloc,
> and then we try to conserve the precious vmalloc resource by using
> per-cpu allocation.
I don't clearly understand what the issue is.
Can you please give me more details?

> This totally breaks down once you go to DMA, where an SG list is
> required. 
scomp backends should be used only for software implementations. 
A driver backend which needs DMA should plug into acomp.

> Unfortunately, this means that there is no easy way
> to linearise the data for our software implementations.
> 
> There is no easy way out I'm afraid.  I think we'll have to bite
> the bullet and refit our software algos so that they handle SG
> lists.
Although feasible, I think it wouldn't be an easy job.

> Not only will this solve the problem at hand, it'll also mean that
> acomp users will never have to do vmalloc so it's a win-win.  It
> also means that we won't need the scomp interface at all.
> 
> This does bring up another question of who should be allocating the
> output memory.  Up until now it has been up to the user to do so.
> However, if our algos can actually handle SG lists then I think it
> should be fairly easy to make them do the allocation instead.  What
> do you think?
I would prefer the user of the API to allocate and manage the output
memory.

Thanks,
Herbert Xu June 23, 2016, 10:50 a.m. UTC | #3
On Wed, Jun 22, 2016 at 04:53:50PM +0100, Giovanni Cabiddu wrote:
> On Mon, Jun 13, 2016 at 04:56:12PM +0800, Herbert Xu wrote:
> > The backup path is also very unlikely to work because we'll be
> > hitting this with 64K sizes and this just won't work with a 4K
> > page size.
> Is scatterwalk_map_and_copy broken?

No that's not the problem.  The problem is that you can't kmalloc
64K of memory.  kmalloc requires physically contiguous memory and
you cannot rely on having 64K of contiguous memory.
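
That is why the usual workaround falls back to vmalloc, e.g.
(untested sketch; fine for CPU-only access, useless for DMA):

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Try physically contiguous memory first, then fall back to a
 * virtually contiguous mapping.  The result must be freed with
 * kvfree(). */
static void *scomp_alloc_buf(size_t len)
{
	void *buf = kmalloc(len, GFP_KERNEL | __GFP_NOWARN);

	return buf ?: vmalloc(len);
}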

> > This totally breaks down once you go to DMA, where an SG list is
> > required. 
> scomp backends should be used only for software implementations. 
> A driver backend which needs DMA should plug into acomp.

What I'm saying is that the current strategy of using vmalloc
memory as input/output buffers cannot possibly work with acomp
since you cannot do DMA over vmalloc memory.

Cheers,
Cabiddu, Giovanni June 24, 2016, 8:37 a.m. UTC | #4
On Thu, Jun 23, 2016 at 06:50:34PM +0800, Herbert Xu wrote:
> No that's not the problem.  The problem is that you can't kmalloc
> 64K of memory.  kmalloc requires physically contiguous memory and
> you cannot rely on having 64K of contiguous memory.
It is clear now. Thanks.

> > > This totally breaks down once you go to DMA, where an SG list is
> > > required. 
> > scomp backends should be used only for software implementations. 
> > A driver backend which needs DMA should plug into acomp.
> 
> What I'm saying is that the current strategy of using vmalloc
> memory as input/output buffers cannot possibly work with acomp
> since you cannot do DMA over vmalloc memory.
I'll remove scomp and refit the software algos to plug into acomp
directly.
Would it be acceptable for software algo implementations to vmalloc
the source and destination buffers, linearize the scatter-gather
lists into them, and operate on those?
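
i.e., something like this (untested; scomp_linearize is a made-up
name, reusing the scatterwalk_map_and_copy helper already used in
the patch):

#include <linux/vmalloc.h>
#include <crypto/scatterwalk.h>

static int scomp_linearize(struct scatterlist *sg, unsigned int len,
			   void **bufp)
{
	void *buf = vmalloc(len);

	if (!buf)
		return -ENOMEM;

	/* Gather the whole SG list into one flat buffer. */
	scatterwalk_map_and_copy(buf, sg, 0, len, 0);
	*bufp = buf;

	return 0;
}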

Thanks,
Herbert Xu June 24, 2016, 9:26 a.m. UTC | #5
On Fri, Jun 24, 2016 at 09:37:28AM +0100, Giovanni Cabiddu wrote:
>
> I'll remove scomp and refit the software algos to plug into acomp
> directly.
> Would it be acceptable for software algo implementations to vmalloc
> the source and destination buffers, linearize the scatter-gather
> lists into them, and operate on those?

Hmm, I guess we can still keep scomp and use vmalloc until someone
spends the effort and optimises each algorithm to use acomp
directly.

So I'd still like to move the allocation down into the algorithm.
That way IPsec no longer needs to keep around a 64K buffer when
the average packet size is less than a page.

What we can do for legacy scomp algorithms is to keep a per-cpu
cache of 64K scratch buffers allocated using vmalloc.  Obviously
this means that if the output size exceeds 64K then we will fail
the operation.  But I don't really see an option besides optimising
the algorithm to use acomp.
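
Something like this (untested, illustrative names only) is what I
have in mind:

#include <linux/percpu.h>
#include <linux/vmalloc.h>

#define SCOMP_SCRATCH_SIZE	65536	/* the 64K limit discussed here */

static void * __percpu *scomp_scratches;

/* One vmalloc'ed scratch buffer per possible CPU, shared by all
 * scomp algorithms; users would take the local buffer under
 * get_cpu() so it cannot be used concurrently. */
static int scomp_alloc_scratches(void)
{
	int i;

	scomp_scratches = alloc_percpu(void *);
	if (!scomp_scratches)
		return -ENOMEM;

	for_each_possible_cpu(i) {
		void *scratch = vmalloc_node(SCOMP_SCRATCH_SIZE,
					     cpu_to_node(i));

		if (!scratch)
			return -ENOMEM;	/* caller unwinds the rest */
		*per_cpu_ptr(scomp_scratches, i) = scratch;
	}

	return 0;
}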

IOW let's move the memory allocation logic of IPComp into the scomp
layer.  Before we do that we should also make sure that no other
users of crypto compress needs output sizes in excess of 64K.

Cheers,
Cabiddu, Giovanni June 28, 2016, 7:41 a.m. UTC | #6
On Fri, Jun 24, 2016 at 05:26:43PM +0800, Herbert Xu wrote:
> Hmm, I guess we can still keep scomp and use vmalloc until someone
> spends the effort and optimises each algorithm to use acomp
> directly.
Ok.

> So I'd still like to move the allocation down into the algorithm.
> That way IPsec no longer needs to keep around a 64K buffer when
> the average packet size is less than a page.
> 
> What we can do for legacy scomp algorithms is to keep a per-cpu
> cache of 64K scratch buffers allocated using vmalloc.  Obviously
> this means that if the output size exceeds 64K then we will fail
> the operation.  But I don't really see an option besides optimising
> the algorithm to use acomp.
Are you suggesting a separate cache of scratch buffers for every
algorithm implementation or a single cache shared across all legacy
scomp algorithms?

> IOW let's move the memory allocation logic of IPComp into the scomp
> layer.  Before we do that we should also make sure that no other
> users of crypto compress needs output sizes in excess of 64K.
Would 128K be OK instead?
We are proposing to use the acomp API from BTRFS. Limiting the size
of the source and destination buffers to 64K would not work, since
BTRFS usually compresses 128KB.
Here is the RFC sent by Weigang to the btrfs list:
http://www.spinics.net/lists/linux-btrfs/msg56648.html

Regards,
Herbert Xu June 28, 2016, 7:51 a.m. UTC | #7
On Tue, Jun 28, 2016 at 08:41:42AM +0100, Giovanni Cabiddu wrote:
>
> Are you suggesting a separate cache of scratch buffers for every
> algorithm implementation or a single cache shared across all legacy
> scomp algorithms?

One that's shared across all scomp algorithms.

> Would 128K be OK instead?
> We are proposing to use the acomp API from BTRFS. Limiting the size
> of the source and destination buffers to 64K would not work, since
> BTRFS usually compresses 128KB.
> Here is the RFC sent by Weigang to the btrfs list:
> http://www.spinics.net/lists/linux-btrfs/msg56648.html

While I don't see any big difference between 64K and 128K, I have
noticed that btrfs is already doing partial decompression on a
page-by-page basis, which is the optimal setup.

So whatever we do for this conversion we should make sure that
btrfs does not regress into using vmalloc.

Cheers,

Patch

diff --git a/crypto/Makefile b/crypto/Makefile
index e817b38..fc8fcfe 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -32,6 +32,7 @@  obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
 obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+obj-$(CONFIG_CRYPTO_ACOMP2) += scompress.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
index f24fef3..a5e6cf1 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -22,8 +22,11 @@ 
 #include <linux/cryptouser.h>
 #include <net/netlink.h>
 #include <crypto/internal/acompress.h>
+#include <crypto/internal/scompress.h>
 #include "internal.h"
 
+static const struct crypto_type crypto_acomp_type;
+
 #ifdef CONFIG_NET
 static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
@@ -67,6 +70,13 @@  static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
 	struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
 	struct acomp_alg *alg = crypto_acomp_alg(acomp);
 
+	if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+		return crypto_init_scomp_ops_async(tfm);
+
+	acomp->compress = alg->compress;
+	acomp->decompress = alg->decompress;
+	acomp->reqsize = alg->reqsize;
+
 	if (alg->exit)
 		acomp->base.exit = crypto_acomp_exit_tfm;
 
@@ -76,15 +86,25 @@  static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
 	return 0;
 }
 
+unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
+{
+	int extsize = crypto_alg_extsize(alg);
+
+	if (alg->cra_type != &crypto_acomp_type)
+		extsize += sizeof(struct crypto_scomp *);
+
+	return extsize;
+}
+
 static const struct crypto_type crypto_acomp_type = {
-	.extsize = crypto_alg_extsize,
+	.extsize = crypto_acomp_extsize,
 	.init_tfm = crypto_acomp_init_tfm,
 #ifdef CONFIG_PROC_FS
 	.show = crypto_acomp_show,
 #endif
 	.report = crypto_acomp_report,
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
-	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_ACOMPRESS_MASK,
 	.type = CRYPTO_ALG_TYPE_ACOMPRESS,
 	.tfmsize = offsetof(struct crypto_acomp, base),
 };
@@ -96,6 +116,31 @@  struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
 
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
+{
+	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+	struct acomp_req *req;
+
+	req = __acomp_request_alloc(acomp);
+	if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
+		return crypto_acomp_scomp_alloc_ctx(req);
+
+	return req;
+}
+EXPORT_SYMBOL_GPL(acomp_request_alloc);
+
+void acomp_request_free(struct acomp_req *req)
+{
+	struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+
+	if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+		crypto_acomp_scomp_free_ctx(req);
+
+	__acomp_request_free(req);
+}
+EXPORT_SYMBOL_GPL(acomp_request_free);
+
 int crypto_register_acomp(struct acomp_alg *alg)
 {
 	struct crypto_alg *base = &alg->base;
diff --git a/crypto/scompress.c b/crypto/scompress.c
new file mode 100644
index 0000000..850b427
--- /dev/null
+++ b/crypto/scompress.c
@@ -0,0 +1,252 @@ 
+/*
+ * Synchronous Compression operations
+ *
+ * Copyright 2015 LG Electronics Inc.
+ * Copyright (c) 2016, Intel Corporation
+ * Author: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/crypto.h>
+#include <crypto/algapi.h>
+#include <linux/cryptouser.h>
+#include <net/netlink.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/acompress.h>
+#include <crypto/internal/scompress.h>
+#include "internal.h"
+
+static const struct crypto_type crypto_scomp_type;
+
+#ifdef CONFIG_NET
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct crypto_report_comp rscomp;
+
+	strncpy(rscomp.type, "scomp", sizeof(rscomp.type));
+
+	if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+		    sizeof(struct crypto_report_comp), &rscomp))
+		goto nla_put_failure;
+	return 0;
+
+nla_put_failure:
+	return -EMSGSIZE;
+}
+#else
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+	return -ENOSYS;
+}
+#endif
+
+static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg)
+	__attribute__ ((unused));
+
+static void crypto_scomp_show(struct seq_file *m, struct crypto_alg *alg)
+{
+	seq_puts(m, "type         : scomp\n");
+}
+
+static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
+{
+	return 0;
+}
+
+static void *scomp_map(struct scatterlist *sg, unsigned int len,
+		       gfp_t gfp_flags)
+{
+	void *buf;
+
+	if (sg_is_last(sg))
+		return kmap_atomic(sg_page(sg)) + sg->offset;
+
+	buf = kmalloc(len, gfp_flags);
+	if (!buf)
+		return NULL;
+
+	scatterwalk_map_and_copy(buf, sg, 0, len, 0);
+
+	return buf;
+}
+
+static void scomp_unmap(struct scatterlist *sg, void *buf, unsigned int len)
+{
+	if (!buf)
+		return;
+
+	if (sg_is_last(sg)) {
+		kunmap_atomic(buf);
+		return;
+	}
+
+	scatterwalk_map_and_copy(buf, sg, 0, len, 1);
+	kfree(buf);
+}
+
+static int scomp_acomp_comp_decomp(struct acomp_req *req, int comp_dir)
+{
+	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+	void **tfm_ctx = acomp_tfm_ctx(tfm);
+	struct crypto_scomp *scomp = *tfm_ctx;
+	gfp_t gfp_flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+			  GFP_KERNEL : GFP_ATOMIC;
+	void **ctx = acomp_request_ctx(req);
+	unsigned int slen = req->slen;
+	unsigned int dlen = req->dlen;
+	u8 *src = NULL;
+	u8 *dst = NULL;
+	int ret;
+
+	src = scomp_map(req->src, slen, gfp_flags);
+	if (!src) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	dst = scomp_map(req->dst, dlen, gfp_flags);
+	if (!dst) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	if (comp_dir)
+		ret = crypto_scomp_compress(scomp, src, slen, dst, &dlen,
+					    *ctx);
+	else
+		ret = crypto_scomp_decompress(scomp, src, slen, dst, &dlen,
+					      *ctx);
+
+	req->dlen = dlen;
+
+out:
+	scomp_unmap(req->src, src, 0);
+	scomp_unmap(req->dst, dst, (ret < 0) ? 0 : dlen);
+
+	return ret;
+}
+
+static int scomp_acomp_compress(struct acomp_req *req)
+{
+	return scomp_acomp_comp_decomp(req, 1);
+}
+
+static int scomp_acomp_decompress(struct acomp_req *req)
+{
+	return scomp_acomp_comp_decomp(req, 0);
+}
+
+static void crypto_exit_scomp_ops_async(struct crypto_tfm *tfm)
+{
+	struct crypto_scomp **ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_scomp(*ctx);
+}
+
+int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
+{
+	struct crypto_alg *calg = tfm->__crt_alg;
+	struct crypto_acomp *crt = __crypto_acomp_tfm(tfm);
+	struct crypto_scomp **ctx = crypto_tfm_ctx(tfm);
+	struct crypto_scomp *scomp;
+
+	if (!crypto_mod_get(calg))
+		return -EAGAIN;
+
+	scomp = crypto_create_tfm(calg, &crypto_scomp_type);
+	if (IS_ERR(scomp)) {
+		crypto_mod_put(calg);
+		return PTR_ERR(scomp);
+	}
+
+	*ctx = scomp;
+	tfm->exit = crypto_exit_scomp_ops_async;
+
+	crt->compress = scomp_acomp_compress;
+	crt->decompress = scomp_acomp_decompress;
+	crt->reqsize = sizeof(void *);
+
+	return 0;
+}
+
+struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req)
+{
+	struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+	struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
+	struct crypto_scomp *scomp = *tfm_ctx;
+	void *ctx;
+
+	ctx = crypto_scomp_alloc_ctx(scomp);
+	if (IS_ERR(ctx)) {
+		kfree(req);
+		return NULL;
+	}
+
+	*req->__ctx = ctx;
+
+	return req;
+}
+
+void crypto_acomp_scomp_free_ctx(struct acomp_req *req)
+{
+	struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+	struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
+	struct crypto_scomp *scomp = *tfm_ctx;
+	void *ctx = *req->__ctx;
+
+	if (ctx)
+		crypto_scomp_free_ctx(scomp, ctx);
+}
+
+static const struct crypto_type crypto_scomp_type = {
+	.extsize = crypto_alg_extsize,
+	.init_tfm = crypto_scomp_init_tfm,
+#ifdef CONFIG_PROC_FS
+	.show = crypto_scomp_show,
+#endif
+	.report = crypto_scomp_report,
+	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.type = CRYPTO_ALG_TYPE_SCOMPRESS,
+	.tfmsize = offsetof(struct crypto_scomp, base),
+};
+
+struct crypto_scomp *crypto_alloc_scomp(const char *alg_name, u32 type,
+					u32 mask)
+{
+	return crypto_alloc_tfm(alg_name, &crypto_scomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_scomp);
+
+int crypto_register_scomp(struct scomp_alg *alg)
+{
+	struct crypto_alg *base = &alg->base;
+
+	base->cra_type = &crypto_scomp_type;
+	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+	base->cra_flags |= CRYPTO_ALG_TYPE_SCOMPRESS;
+
+	return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_scomp);
+
+int crypto_unregister_scomp(struct scomp_alg *alg)
+{
+	return crypto_unregister_alg(&alg->base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_scomp);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Synchronous compression type");
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index f4e2f96..93f938d 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -38,9 +38,15 @@  struct acomp_req {
  * struct crypto_acomp - user-instantiated objects which encapsulate
  * algorithms and core processing logic
  *
- * @base: Common crypto API algorithm data structure
+ * @compress:   Function performs a compress operation
+ * @decompress: Function performs a de-compress operation
+ * @reqsize:	Context size for (de)compression requests
+ * @base:	Common crypto API algorithm data structure
  */
 struct crypto_acomp {
+	int (*compress)(struct acomp_req *req);
+	int (*decompress)(struct acomp_req *req);
+	unsigned int reqsize;
 	struct crypto_tfm base;
 };
 
@@ -119,7 +125,7 @@  static inline struct acomp_alg *crypto_acomp_alg(struct crypto_acomp *tfm)
 
 static inline unsigned int crypto_acomp_reqsize(struct crypto_acomp *tfm)
 {
-	return crypto_acomp_alg(tfm)->reqsize;
+	return tfm->reqsize;
 }
 
 static inline void acomp_request_set_tfm(struct acomp_req *req,
@@ -159,26 +165,14 @@  static inline int crypto_has_acomp(const char *alg_name, u32 type, u32 mask)
  *
  * Return: allocated handle in case of success or NULL in case of an error.
  */
-static inline struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm)
-{
-	struct acomp_req *req;
-
-	req = kzalloc(sizeof(*req) + crypto_acomp_reqsize(tfm), GFP_KERNEL);
-	if (likely(req))
-		acomp_request_set_tfm(req, tfm);
-
-	return req;
-}
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm);
 
 /**
  * acomp_request_free() -- zeroize and free asynchronous (de)compression request
  *
  * @req: request to free
  */
-static inline void acomp_request_free(struct acomp_req *req)
-{
-	kzfree(req);
-}
+void acomp_request_free(struct acomp_req *req);
 
 /**
  * acomp_request_set_callback() -- Sets an asynchronous callback
@@ -236,9 +230,8 @@  static inline void acomp_request_set_params(struct acomp_req *req,
 static inline int crypto_acomp_compress(struct acomp_req *req)
 {
 	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
-	struct acomp_alg *alg = crypto_acomp_alg(tfm);
 
-	return alg->compress(req);
+	return tfm->compress(req);
 }
 
 /**
@@ -253,9 +246,8 @@  static inline int crypto_acomp_compress(struct acomp_req *req)
 static inline int crypto_acomp_decompress(struct acomp_req *req)
 {
 	struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
-	struct acomp_alg *alg = crypto_acomp_alg(tfm);
 
-	return alg->decompress(req);
+	return tfm->decompress(req);
 }
 
 #endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 294f2ee..267afdd5 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -39,6 +39,21 @@  static inline const char *acomp_alg_name(struct crypto_acomp *tfm)
 	return crypto_acomp_tfm(tfm)->__crt_alg->cra_name;
 }
 
+static inline struct acomp_req *__acomp_request_alloc(struct crypto_acomp *tfm)
+{
+	struct acomp_req *req;
+
+	req = kzalloc(sizeof(*req) + crypto_acomp_reqsize(tfm), GFP_KERNEL);
+	if (likely(req))
+		acomp_request_set_tfm(req, tfm);
+	return req;
+}
+
+static inline void __acomp_request_free(struct acomp_req *req)
+{
+	kzfree(req);
+}
+
 /**
  * crypto_register_acomp() -- Register asynchronous compression algorithm
  *
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
new file mode 100644
index 0000000..a88fc8d
--- /dev/null
+++ b/include/crypto/internal/scompress.h
@@ -0,0 +1,134 @@ 
+/*
+ * Synchronous Compression operations
+ *
+ * Copyright 2015 LG Electronics Inc.
+ * Copyright (c) 2016, Intel Corporation
+ * Author: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _CRYPTO_SCOMP_INT_H
+#define _CRYPTO_SCOMP_INT_H
+#include <linux/crypto.h>
+
+struct crypto_scomp {
+	struct crypto_tfm base;
+};
+
+/**
+ * struct scomp_alg - synchronous compression algorithm
+ *
+ * @alloc_ctx:	Function allocates algorithm specific context
+ * @free_ctx:	Function frees context allocated with alloc_ctx
+ * @compress:	Function performs a compress operation
+ * @decompress:	Function performs a de-compress operation
+ * @init:	Initialize the cryptographic transformation object.
+ *		This function is used to initialize the cryptographic
+ *		transformation object. This function is called only once at
+ *		the instantiation time, right after the transformation context
+ *		was allocated. In case the cryptographic hardware has some
+ *		special requirements which need to be handled by software, this
+ *		function shall check for the precise requirement of the
+ *		transformation and put any software fallbacks in place.
+ * @exit:	Deinitialize the cryptographic transformation object. This is a
+ *		counterpart to @init, used to remove various changes set in
+ *		@init.
+ * @base:	Common crypto API algorithm data structure
+ */
+struct scomp_alg {
+	void *(*alloc_ctx)(struct crypto_scomp *tfm);
+	void (*free_ctx)(struct crypto_scomp *tfm, void *ctx);
+	int (*compress)(struct crypto_scomp *tfm, const u8 *src,
+			unsigned int slen, u8 *dst, unsigned int *dlen,
+			void *ctx);
+	int (*decompress)(struct crypto_scomp *tfm, const u8 *src,
+			  unsigned int slen, u8 *dst, unsigned int *dlen,
+			  void *ctx);
+	struct crypto_alg base;
+};
+
+static inline struct scomp_alg *__crypto_scomp_alg(struct crypto_alg *alg)
+{
+	return container_of(alg, struct scomp_alg, base);
+}
+
+static inline struct crypto_scomp *__crypto_scomp_tfm(struct crypto_tfm *tfm)
+{
+	return container_of(tfm, struct crypto_scomp, base);
+}
+
+static inline struct crypto_tfm *crypto_scomp_tfm(struct crypto_scomp *tfm)
+{
+	return &tfm->base;
+}
+
+static inline void crypto_free_scomp(struct crypto_scomp *tfm)
+{
+	crypto_destroy_tfm(tfm, crypto_scomp_tfm(tfm));
+}
+
+static inline struct scomp_alg *crypto_scomp_alg(struct crypto_scomp *tfm)
+{
+	return __crypto_scomp_alg(crypto_scomp_tfm(tfm)->__crt_alg);
+}
+
+static inline void *crypto_scomp_alloc_ctx(struct crypto_scomp *tfm)
+{
+	return crypto_scomp_alg(tfm)->alloc_ctx(tfm);
+}
+
+static inline void crypto_scomp_free_ctx(struct crypto_scomp *tfm,
+					 void *ctx)
+{
+	return crypto_scomp_alg(tfm)->free_ctx(tfm, ctx);
+}
+
+static inline int crypto_scomp_compress(struct crypto_scomp *tfm,
+					const u8 *src, unsigned int slen,
+					u8 *dst, unsigned int *dlen, void *ctx)
+{
+	return crypto_scomp_alg(tfm)->compress(tfm, src, slen, dst, dlen, ctx);
+}
+
+static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,
+					  const u8 *src, unsigned int slen,
+					  u8 *dst, unsigned int *dlen,
+					  void *ctx)
+{
+	return crypto_scomp_alg(tfm)->decompress(tfm, src, slen, dst, dlen,
+						 ctx);
+}
+
+int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
+struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
+void crypto_acomp_scomp_free_ctx(struct acomp_req *req);
+
+/**
+ * crypto_register_scomp() -- Register synchronous compression algorithm
+ *
+ * Function registers an implementation of a synchronous
+ * compression algorithm
+ *
+ * @alg:	algorithm definition
+ *
+ * Return: zero on success; error code in case of error
+ */
+int crypto_register_scomp(struct scomp_alg *alg);
+
+/**
+ * crypto_unregister_scomp() -- Unregister synchronous compression algorithm
+ *
+ * Function unregisters an implementation of a synchronous
+ * compression algorithm
+ *
+ * @alg:	algorithm definition
+ *
+ * Return: zero on success; error code in case of error
+ */
+int crypto_unregister_scomp(struct scomp_alg *alg);
+
+#endif
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 7987323..23ceb0b 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -49,6 +49,7 @@ 
 #define CRYPTO_ALG_TYPE_ABLKCIPHER	0x00000005
 #define CRYPTO_ALG_TYPE_GIVCIPHER	0x00000006
 #define CRYPTO_ALG_TYPE_ACOMPRESS	0x00000008
+#define CRYPTO_ALG_TYPE_SCOMPRESS	0x0000000a
 #define CRYPTO_ALG_TYPE_RNG		0x0000000c
 #define CRYPTO_ALG_TYPE_AKCIPHER	0x0000000d
 #define CRYPTO_ALG_TYPE_DIGEST		0x0000000e
@@ -59,6 +60,7 @@ 
 #define CRYPTO_ALG_TYPE_HASH_MASK	0x0000000e
 #define CRYPTO_ALG_TYPE_AHASH_MASK	0x0000000e
 #define CRYPTO_ALG_TYPE_BLKCIPHER_MASK	0x0000000c
+#define CRYPTO_ALG_TYPE_ACOMPRESS_MASK	0x0000000c
 
 #define CRYPTO_ALG_LARVAL		0x00000010
 #define CRYPTO_ALG_DEAD			0x00000020