
[1/9] crypto: add zbufsize() interface

Message ID: 20180802215118.17752-2-keescook@chromium.org (mailing list archive)
State: Changes Requested
Delegated to: Herbert Xu
Series crypto: add zbufsize() interface

Commit Message

Kees Cook Aug. 2, 2018, 9:51 p.m. UTC
When pstore was refactored to use the crypto compress API in:

  commit cb3bee0369bc ("pstore: Use crypto compress API")

nearly all the pstore-specific compression routines were replaced with
the existing crypto compression API. One case remained: calculating the
"worst case" compression sizes ahead of time so it could have a buffer
preallocated for doing compression (which was called "zbufsize").

To make pstore fully algorithm-agnostic, the compression API needs to
grow this functionality. This adds the interface to support querying the
"worst case" estimate, with a new "zbufsize" routine that each compressor
can implement. The per-compressor implementations come in later commits.
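
For illustration, a caller with a fixed record size (for example pstore)
could then preallocate its compression buffer roughly like this (a sketch
only, not part of this patch; "deflate" and record_size are placeholders):

	struct crypto_comp *tfm;
	unsigned int worst_size = 0;
	void *buf;
	int ret;

	tfm = crypto_alloc_comp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Ask the compressor for its worst-case output size for record_size bytes. */
	ret = crypto_comp_zbufsize(tfm, record_size, &worst_size);
	if (!ret)
		/* Preallocate now so compression can later run from the panic path. */
		buf = kmalloc(worst_size, GFP_KERNEL);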

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 crypto/compress.c                   |  9 +++++++++
 include/crypto/internal/scompress.h | 11 +++++++++++
 include/linux/crypto.h              | 12 ++++++++++++
 3 files changed, 32 insertions(+)

Comments

Herbert Xu Aug. 7, 2018, 9:45 a.m. UTC | #1
On Thu, Aug 02, 2018 at 02:51:10PM -0700, Kees Cook wrote:
> When pstore was refactored to use the crypto compress API in:
> 
>   commit cb3bee0369bc ("pstore: Use crypto compress API")
> 
> nearly all the pstore-specific compression routines were replaced with
> the existing crypto compression API. One case remained: calculating the
> "worst case" compression sizes ahead of time so it could have a buffer
> preallocated for doing compression (which was called "zbufsize").
> 
> To make pstore fully algorithm-agnostic, the compression API needs to
> grow this functionality. This adds the interface to support querying the
> "worst case" estimate, with a new "zbufsize" routine that each compressor
> can implement. The per-compressor implementations come in later commits.
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  crypto/compress.c                   |  9 +++++++++
>  include/crypto/internal/scompress.h | 11 +++++++++++
>  include/linux/crypto.h              | 12 ++++++++++++
>  3 files changed, 32 insertions(+)
> 
> diff --git a/crypto/compress.c b/crypto/compress.c
> index f2d522924a07..29a80bb3b9d3 100644
> --- a/crypto/compress.c
> +++ b/crypto/compress.c
> @@ -33,12 +33,21 @@ static int crypto_decompress(struct crypto_tfm *tfm,
>  	                                                   dlen);
>  }
>  
> +static int crypto_zbufsize(struct crypto_tfm *tfm,
> +			   unsigned int slen, unsigned int *dlen)
> +{
> +	if (!tfm->__crt_alg->cra_compress.coa_zbufsize)
> +		return -ENOTSUPP;
> +	return tfm->__crt_alg->cra_compress.coa_zbufsize(tfm, slen, dlen);
> +}

Please don't add new features to the old compress interface.  Any
new improvements should be added to scomp/acomp only.  Users who
need new features should be converted.

Thanks,
Kees Cook Aug. 7, 2018, 6:10 p.m. UTC | #2
On Tue, Aug 7, 2018 at 2:45 AM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Aug 02, 2018 at 02:51:10PM -0700, Kees Cook wrote:
>> When pstore was refactored to use the crypto compress API in:
>>
>>   commit cb3bee0369bc ("pstore: Use crypto compress API")
>>
>> nearly all the pstore-specific compression routines were replaced with
>> the existing crypto compression API. One case remained: calculating the
>> "worst case" compression sizes ahead of time so it could have a buffer
>> preallocated for doing compression (which was called "zbufsize").
>>
>> To make pstore fully algorithm-agnostic, the compression API needs to
>> grow this functionality. This adds the interface to support querying the
>> "worst case" estimate, with a new "zbufsize" routine that each compressor
>> can implement. The per-compressor implementations come in later commits.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>>  crypto/compress.c                   |  9 +++++++++
>>  include/crypto/internal/scompress.h | 11 +++++++++++
>>  include/linux/crypto.h              | 12 ++++++++++++
>>  3 files changed, 32 insertions(+)
>>
>> diff --git a/crypto/compress.c b/crypto/compress.c
>> index f2d522924a07..29a80bb3b9d3 100644
>> --- a/crypto/compress.c
>> +++ b/crypto/compress.c
>> @@ -33,12 +33,21 @@ static int crypto_decompress(struct crypto_tfm *tfm,
>>                                                          dlen);
>>  }
>>
>> +static int crypto_zbufsize(struct crypto_tfm *tfm,
>> +                        unsigned int slen, unsigned int *dlen)
>> +{
>> +     if (!tfm->__crt_alg->cra_compress.coa_zbufsize)
>> +             return -ENOTSUPP;
>> +     return tfm->__crt_alg->cra_compress.coa_zbufsize(tfm, slen, dlen);
>> +}
>
> Please don't add new features to the old compress interface.  Any
> new improvements should be added to scomp/acomp only.  Users who
> need new features should be converted.

So, keep crypto_scomp_zbufsize() and drop crypto_comp_zbufsize() and
crypto_zbufsize()? Should I add crypto_acomp_zbufsize()?

-Kees
Herbert Xu Aug. 8, 2018, 2:53 a.m. UTC | #3
On Tue, Aug 07, 2018 at 11:10:10AM -0700, Kees Cook wrote:
>
> > Please don't add new features to the old compress interface.  Any
> > new improvements should be added to scomp/acomp only.  Users who
> > need new features should be converted.
> 
> So, keep crypto_scomp_zbufsize() and drop crypto_comp_zbufsize() and
> crypto_zbufsize()? Should I add crypto_acomp_zbufsize()?

Yes and yes.  acomp is the primary interface and should support
all the features in scomp.

Thanks,
Kees Cook Dec. 1, 2021, 11:39 p.m. UTC | #4
On Wed, Aug 08, 2018 at 10:53:19AM +0800, Herbert Xu wrote:
> On Tue, Aug 07, 2018 at 11:10:10AM -0700, Kees Cook wrote:
> >
> > > Please don't add new features to the old compress interface.  Any
> > > new improvements should be added to scomp/acomp only.  Users who
> > > need new features should be converted.
> > 
> > So, keep crypto_scomp_zbufsize() and drop crypto_comp_zbufsize() and
> > crypto_zbufsize()? Should I add crypto_acomp_zbufsize()?
> 
> Yes and yes.  acomp is the primary interface and should support
> all the features in scomp.

*thread necromancy*

Okay, I'm looking at this again because of the need in the module loader
to know "worst case decompression size"[1]. I am at a loss for how (or
why) the acomp interface is the "primary interface".

For modules, all that would be wanted is this, where the buffer can be
sized and allocated on demand:

u8 *decompressed = NULL;
size_t decompressed_size = 0;

decompressed = decompress(decompressed, compressed, compressed_size, &decompressed_size);

For pstore, the compressed_size is fixed and the decompression buffer
must be preallocated (for catching panic dumps), so the worst-case size
needs to be known in advance:

u8 *decompressed = NULL;
size_t decompressed_worst_size = 0;
size_t decompressed_size = 0;

worst_case(&decompressed_worst_size, compressed_size);

decompressed = kmalloc(decompressed_worst_size, GFP_KERNEL);
...
decompressed_size = decompressed_worst_size;
decompress(decompressed, compressed, compressed_size, &decompressed_size);


I don't see anything like this in the kernel for handling a simple
buffer-to-buffer decompression besides crypto_comp_decompress(). The
acomp interface is wildly over-complex for this. What is the right
way to do this? (I can't find any documentation that discusses
compress/decompress[2].)
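
For reference, the legacy path I mean is roughly this (error handling
trimmed, "deflate" only as an example algorithm):

	struct crypto_comp *tfm = crypto_alloc_comp("deflate", 0, 0);
	unsigned int dlen = decompressed_worst_size;

	/* Simple buffer-to-buffer decompression via the legacy comp API. */
	crypto_comp_decompress(tfm, compressed, compressed_size,
			       decompressed, &dlen);
	crypto_free_comp(tfm);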

-Kees

[1] https://lore.kernel.org/linux-modules/YaMYJv539OEBz5B%2F@google.com/
[2] https://www.kernel.org/doc/html/latest/crypto/api-samples.html
Herbert Xu Dec. 2, 2021, 1:58 a.m. UTC | #5
On Wed, Dec 01, 2021 at 03:39:06PM -0800, Kees Cook wrote:
>
> Okay, I'm looking at this again because of the need in the module loader
> to know "worst case decompression size"[1]. I am at a loss for how (or
> why) the acomp interface is the "primary interface".

This is similar to how we transitioned from the old hash interface
to shash/ahash.

Basically the legacy interface is synchronous only and cannot support
hardware devices without having the CPU spinning while waiting for the
result to come back.

If you only care about synchronous support and don't need to access
these hardware devices, then you should use the new scomp interface,
which is equivalent to the old compress interface but built in a way
that allows acomp users to easily access sync algorithms. If you are
processing large amounts of data and wish to access offload devices,
then you should consider using the async acomp interface.
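
For reference, a synchronous one-shot decompression on top of acomp looks
roughly like the sketch below ("deflate" is only an example algorithm name,
and error handling is omitted):

#include <crypto/acompress.h>
#include <linux/scatterlist.h>

static int example_decompress(const void *in, unsigned int inlen,
			      void *out, unsigned int outlen)
{
	struct crypto_acomp *tfm = crypto_alloc_acomp("deflate", 0, 0);
	struct acomp_req *req = acomp_request_alloc(tfm);
	struct scatterlist src, dst;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	sg_init_one(&src, in, inlen);
	sg_init_one(&dst, out, outlen);
	acomp_request_set_params(req, &src, &dst, inlen, outlen);

	/* Have the async completion wake us, then wait for the result. */
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
	/* On success, req->dlen holds the decompressed length. */

	acomp_request_free(req);
	crypto_free_acomp(tfm);
	return ret;
}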

> For modules, all that would be wanted is this, where the buffer can be
> sized and allocated on demand:
> 
> u8 *decompressed = NULL;
> size_t decompressed_size = 0;
> 
> decompressed = decompress(decompressed, compressed, compressed_size, &decompressed_size);
> 
> For pstore, the compressed_size is fixed and the decompression buffer
> must be preallocated (for catching panic dumps), so the worst-case size
> needs to be known in advance:
> 
> u8 *decompressed = NULL;
> size_t decompressed_worst_size = 0;
> size_t decompressed_size = 0;
> 
> worst_case(&decompressed_worst_size, compressed_size);
> 
> decompressed = kmalloc(decompressed_worst_size, GFP_KERNEL);
> ...
> decompressed_size = decompressed_worst_size;
> decompress(decompressed, compressed, compressed_size, &decompressed_size);
> 
> 
> I don't see anything like this in the kernel for handling a simple
> buffer-to-buffer decompression besides crypto_comp_decompress(). The
> acomp interface is wildly over-complex for this. What is the right
> way to do this? (I can't find any documentation that discusses
> compress/decompress[2].)

I think you're asking about a different issue, which is that we
don't have an interface for on-the-go allocation of decompressed
results so every user has to allocate the maximum worst-case buffer.

This is definitely an area that should be addressed but a lot of work
needs to be done to get there.  Essentially we'd need to convert
the underlying algorithms to a model where they decompress into a
list of pages and then they could simply allocate a new page if they
need extra space.

The result can then be returned to the original caller as an SG list.
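
Nothing like that exists today, but the rough shape might be something
like the following (entirely hypothetical names, just to illustrate):

	/* Hypothetical: the algorithm allocates output pages as needed and
	 * returns the result as a scatterlist for the caller to walk and free. */
	struct scatterlist *out_sgl;
	unsigned int out_len;

	err = crypto_acomp_decompress_sg(req, GFP_KERNEL, &out_sgl, &out_len);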

Would you be willing to work on something like this? This would benefit
all existing users too.  For example, IPsec would no longer need to
allocate those 64K buffers for IPcomp.

Unfortunately not many people care deeply about compression so
volunteers are hard to find.

Cheers,
Kees Cook Dec. 2, 2021, 3:51 a.m. UTC | #6
On Thu, Dec 02, 2021 at 12:58:20PM +1100, Herbert Xu wrote:
> On Wed, Dec 01, 2021 at 03:39:06PM -0800, Kees Cook wrote:
> >
> > Okay, I'm looking at this again because of the need in the module loader
> > to know "worst case decompression size"[1]. I am at a loss for how (or
> > why) the acomp interface is the "primary interface".
> 
> This is similar to how we transitioned from the old hash interface
> to shash/ahash.
> 
> Basically the legacy interface is synchronous only and cannot support
> hardware devices without having the CPU spinning while waiting for the
> result to come back.
> 
> If you only care about synchronous support and don't need to access
> these hardware devices, then you should use the new scomp interface,
> which is equivalent to the old compress interface but built in a way
> that allows acomp users to easily access sync algorithms. If you are
> processing large amounts of data and wish to access offload devices,
> then you should consider using the async acomp interface.

But the scomp API appears to be "internal only":

include/crypto/internal/scompress.h:static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,

What's the correct API calling sequence to do a simple decompress?

> > For modules, all that would be wanted is this, where the buffer can be
> > sized and allocated on demand:
> > 
> > u8 *decompressed = NULL;
> > size_t decompressed_size = 0;
> > 
> > decompressed = decompress(decompressed, compressed, compressed_size, &decompressed_size);
> > 
> > For pstore, the compressed_size is fixed and the decompression buffer
> > must be preallocated (for catching panic dumps), so the worst-case size
> > needs to be known in advance:
> > 
> > u8 *decompressed = NULL;
> > size_t decompressed_worst_size = 0;
> > size_t decompressed_size = 0;
> > 
> > worst_case(&decompressed_worst_size, compressed_size);
> > 
> > decompressed = kmalloc(decompressed_worst_size, GFP_KERNEL);
> > ...
> > decompressed_size = decompressed_worst_size;
> > decompress(decompressed, compressed, compressed_size, &decompressed_size);
> > 
> > 
> > I don't see anything like this in the kernel for handling a simple
> > buffer-to-buffer decompression besides crypto_comp_decompress(). The
> > acomp interface is wildly over-complex for this. What is the right
> > way to do this? (I can't find any documentation that discusses
> > compress/decompress[2].)
> 
> I think you're asking about a different issue, which is that we
> don't have an interface for on-the-go allocation of decompressed
> results so every user has to allocate the maximum worst-case buffer.
> 
> This is definitely an area that should be addressed but a lot of work
> needs to be done to get there.  Essentially we'd need to convert
> the underlying algorithms to a model where they decompress into a
> list of pages and then they could simply allocate a new page if they
> need extra space.
> 
> The result can then be returned to the original caller as an SG list.
> 
> Would you be willing to work on something like this? This would benefit
> all existing users too.  For example, IPsec would no longer need to
> allocate those 64K buffers for IPcomp.
> 
> Unfortunately not many people care deeply about compression so
> volunteers are hard to find.

Dmitry has, I think, a bit of this already in [1] that could maybe be
generalized if needed?

pstore still needs the "worst case" API to do a preallocation, though.

Anyway, if I could have an example of how to use scomp in pstore, I
could better see where to wire up the proposed zbufsize API...

Thanks!

[1] https://lore.kernel.org/linux-modules/YaMYJv539OEBz5B%2F@google.com/#Z31kernel:module_decompress.c
Herbert Xu Dec. 2, 2021, 3:57 a.m. UTC | #7
On Wed, Dec 01, 2021 at 07:51:25PM -0800, Kees Cook wrote:
>
> But the scomp API appears to be "internal only":
> 
> include/crypto/internal/scompress.h:static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,
> 
> What's the correct API calling sequence to do a simple decompress?

OK we haven't wired up scomp to users because there was no user
to start with.  So if you like you can create it just as we did
for shash.

The question becomes: do you want to restrict your use-case to
synchronous-only algorithms, i.e., you will never be able to access
offload devices that support compression?

Typically this would only make sense if you process a very small
amount of data, but this seems counter-intuitive with compression
(it does make sense with hashing where we often hash just 16 bytes).

Cheers,
Kees Cook Dec. 2, 2021, 8:10 a.m. UTC | #8
On Thu, Dec 02, 2021 at 02:57:27PM +1100, Herbert Xu wrote:
> On Wed, Dec 01, 2021 at 07:51:25PM -0800, Kees Cook wrote:
> >
> > But the scomp API appears to be "internal only":
> > 
> > include/crypto/internal/scompress.h:static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,
> > 
> > What's the correct API calling sequence to do a simple decompress?
> 
> OK we haven't wired up scomp to users because there was no user
> to start with.  So if you like you can create it just as we did
> for shash.
> 
> The question becomes: do you want to restrict your use-case to
> synchronous-only algorithms, i.e., you will never be able to access
> offload devices that support compression?

I'd rather just have a simple API that hid all the async (or sync) details
and would work with whatever was the "best" implementation. Neither pstore
nor the module loader has anything else to do while decompression happens.

> Typically this would only make sense if you process a very small
> amount of data, but this seems counter-intuitive with compression
> (it does make sense with hashing where we often hash just 16 bytes).

pstore usually works on a handful of small buffers. (One of the largest
I've seen is used by Chrome OS: 6 128K buffers.) Speed is not important
(done at most 6 times at boot, and 1 time on panic), and, in fact,
offload is probably a bad idea just to keep the machinery needed to
store a panic log as small as possible.

The module loader is also doing non-fast-path decompression of modules,
with each of those being maybe a couple megabytes. This isn't fast-path
either: if it's not the kernel, it'd be userspace doing the decompression,
and it only happens once per module, usually at boot.

Why can't crypto_comp_*() be refactored to wrap crypto_acomp_*() (and
crypto_scomp_*())? I can see so many other places that would benefit from
this. Here are just some of the places that appear to be hand-rolling
compression/decompression routines that might benefit from this kind of
code re-use and compression alg agnosticism:

fs/pstore/platform.c
drivers/gpu/drm/i915/i915_gpu_error.c
kernel/power/swap.c
arch/powerpc/kernel/nvram_64.c
security/apparmor/policy_unpack.c
drivers/base/regmap/regcache-lzo.c
fs/btrfs/lzo.c
fs/btrfs/zlib.c
fs/f2fs/compress.c
drivers/net/ethernet/chelsio/cxgb4/cudbg_zlib.h
drivers/net/ppp/ppp_deflate.c
fs/jffs2/compr_lzo.c
fs/jffs2/compr_zlib.c

But right now there isn't a good way to just do a simple one-off:

	dst = decompress_named(alg_name, dst, dst_len, src, src_len);

or if it happens more than once:

	alg = compressor(alg_name);
	set_comp_alg_param(param, value);
        ...
        for (...) {
		...
		dst = compress(alg, dst, dst_len, src, src_len);
		...
	}
        ...
        free_compressor(alg);

-Kees
Herbert Xu Dec. 3, 2021, 2:28 a.m. UTC | #9
On Thu, Dec 02, 2021 at 12:10:13AM -0800, Kees Cook wrote:
>
> I'd rather just have a simple API that hid all the async (or sync) details
> and would work with whatever was the "best" implementation. Neither pstore
> nor the module loader has anything else to do while decompression happens.

Well that's exactly what the acomp interface is supposed to be.
It supports any algorithm, whether sync or async.  However, for
obvious reasons this interface has to be async.

> > Typically this would only make sense if you process a very small
> > amount of data, but this seems counter-intuitive with compression
> > (it does make sense with hashing where we often hash just 16 bytes).
> 
> pstore usually works on a handful of small buffers. (One of the largest
> I've seen is used by Chrome OS: 6 128K buffers.) Speed is not important
> (done at most 6 times at boot, and 1 time on panic), and, in fact,
> offload is probably a bad idea just to keep the machinery needed to
> store a panic log as small as possible.

In that case creating an scomp user interface is probably the best
course of action.

> Why can't crypto_comp_*() be refactored to wrap crypto_acomp_*() (and
> crypto_scomp_*())? I can see so many other places that would benefit from
> this. Here are just some of the places that appear to be hand-rolling
> compression/decompression routines that might benefit from this kind of
> code re-use and compression alg agnosticism:

We cannot provide async hardware through a sync-only interface
because that may lead to deadlock.  For your use cases you should
avoid using any async implementations.

The scomp interface is meant to be pretty much identical to the
legacy comp interface except that it supports integration with
acomp.

Because nobody has had a need for scomp we have not added an
interface for it so it only exists as part of the low-level API.
You're most welcome to expose it if you don't need the async
support part of acomp.

Cheers,
Dmitry Torokhov Dec. 3, 2021, 8:49 p.m. UTC | #10
On Fri, Dec 03, 2021 at 01:28:21PM +1100, Herbert Xu wrote:
> On Thu, Dec 02, 2021 at 12:10:13AM -0800, Kees Cook wrote:
> >
> > I'd rather just have a simple API that hid all the async (or sync) details
> > and would work with whatever was the "best" implementation. Neither pstore
> > nor the module loader has anything else to do while decompression happens.
> 
> Well that's exactly what the acomp interface is supposed to be.
> It supports any algorithm, whether sync or async.  However, for
> obvious reasons this interface has to be async.
> 
> > > Typically this would only make sense if you process a very small
> > > amount of data, but this seems counter-intuitive with compression
> > > (it does make sense with hashing where we often hash just 16 bytes).
> > 
> > pstore usually works on a handful of small buffers. (One of the largest
> > I've seen is used by Chrome OS: 6 128K buffers.) Speed is not important
> > (done at most 6 times at boot, and 1 time on panic), and, in fact,
> > offload is probably a bad idea just to keep the machinery needed to
> > store a panic log as small as possible.
> 
> In that case creating an scomp user interface is probably the best
> course of action.
> 
> > Why can't crypto_comp_*() be refactored to wrap crypto_acomp_*() (and
> > crypto_scomp_*())? I can see so many other places that would benefit from
> > this. Here are just some of the places that appear to be hand-rolling
> > compression/decompression routines that might benefit from this kind of
> > code re-use and compression alg agnosticism:
> 
> We cannot provide async hardware through a sync-only interface
> because that may lead to deadlock.  For your use cases you should
> avoid using any async implementations.
> 
> The scomp interface is meant to be pretty much identical to the
> legacy comp interface except that it supports integration with
> acomp.
> 
> Because nobody has had a need for scomp we have not added an
> interface for it so it only exists as part of the low-level API.
> You're most welcome to expose it if you don't need the async
> support part of acomp.

I must be getting lost in terminology, and it feels to me that what is
discussed here is most likely of no interest to a lot of potential
users, especially ones that do compression/decompression. In the majority
of cases they want to simply compress or decompress data, and they just
want to do it quickly and with a minimal amount of memory. They
do not particularly care if the task is being offloaded or executed on
the main CPU, either on a separate thread or on the same thread, so the
discussion about scomp/acomp/etc is of no interest to them. From their
perspective they'd be totally fine with a wrapper that would do:

int decompress(...) {
	prepare_request()
	send_request()
	wait_for_request()
}

and from their perspective this would be a synchronous API they are
happy with.

So from the POV of such users, what is actually missing is a streaming
mode of compressing/decompressing, where the core would allow supplying
additional data on demand and consuming output as it is produced. I do
not see anything like that in either scomp or acomp.

Thanks.
Herbert Xu Dec. 7, 2021, 5:20 a.m. UTC | #11
On Fri, Dec 03, 2021 at 12:49:26PM -0800, Dmitry Torokhov wrote:
>
> I must be getting lost in terminology, and it feels to me that what is
> discussed here is most likely of no interest to a lot of potential
> users, especially ones that do compression/decompression. In the majority
> of cases they want to simply compress or decompress data, and they just
> want to do it quickly and with a minimal amount of memory. They
> do not particularly care if the task is being offloaded or executed on
> the main CPU, either on a separate thread or on the same thread, so the
> discussion about scomp/acomp/etc is of no interest to them. From their
> perspective they'd be totally fine with a wrapper that would do:
> 
> int decompress(...) {
> 	prepare_request()
> 	send_request()
> 	wait_for_request()
> }
> 
> and from their perspective this would be a synchronous API they are
> happy with.

You can certainly do that as a Crypto API user.  And we do have
some users who do exactly this (for example, testmgr does that
when testing async algorithms).  However, this can't be a part of
the API itself since many of our users execute in atomic contexts.

> So from the POV of such users, what is actually missing is a streaming
> mode of compressing/decompressing, where the core would allow supplying
> additional data on demand and consuming output as it is produced. I do
> not see anything like that in either scomp or acomp.

That is indeed a very crucial part of the compression API that is
missing.  And I would love someone to donate some time to addressing
this.

Thanks,
Dmitry Torokhov Dec. 7, 2021, 6:24 a.m. UTC | #12
On Tue, Dec 07, 2021 at 04:20:29PM +1100, Herbert Xu wrote:
> On Fri, Dec 03, 2021 at 12:49:26PM -0800, Dmitry Torokhov wrote:
> >
> > I must be getting lost in terminology, and it feels to me that what is
> > discussed here is most likely of no interest to a lot of potential
> > users, especially ones that do compression/decompression. In the majority
> > of cases they want to simply compress or decompress data, and they just
> > want to do it quickly and with a minimal amount of memory. They
> > do not particularly care if the task is being offloaded or executed on
> > the main CPU, either on a separate thread or on the same thread, so the
> > discussion about scomp/acomp/etc is of no interest to them. From their
> > perspective they'd be totally fine with a wrapper that would do:
> > 
> > int decompress(...) {
> > 	prepare_request()
> > 	send_request()
> > 	wait_for_request()
> > }
> > 
> > and from their perspective this would be a synchronous API they are
> > happy with.
> 
> You can certainly do that as a Crypto API user.  And we do have
> some users who do exactly this (for example, testmgr does that
> when testing async algorithms).  However, this can't be a part of
> the API itself since many of our users execute in atomic contexts.

That is what I am confused about: why can't it be a part of the API? Users
that are running in atomic contexts would not be able to use it, but we
have a lot of precedents for it. See for example spi_sync() vs
spi_async(). Callers have a choice as to which one to use, based on
their needs.

Thanks.
Herbert Xu Dec. 7, 2021, 6:27 a.m. UTC | #13
On Mon, Dec 06, 2021 at 10:24:47PM -0800, Dmitry Torokhov wrote:
>
> That is what I am confused about: why can't it be a part of the API? Users
> that are running in atomic contexts would not be able to use it, but we
> have a lot of precedents for it. See for example spi_sync() vs
> spi_async(). Callers have a choice as to which one to use, based on
> their needs.

We already have a helper in the form of crypto_wait_req.  If you
have any suggestions for making this easier to use, then I'm more than
happy to consider them.
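
For the acomp case that boils down to something like:

	DECLARE_CRYPTO_WAIT(wait);

	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	err = crypto_wait_req(crypto_acomp_decompress(req), &wait);

which is essentially the prepare/send/wait wrapper described above.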

Thanks,

Patch

diff --git a/crypto/compress.c b/crypto/compress.c
index f2d522924a07..29a80bb3b9d3 100644
--- a/crypto/compress.c
+++ b/crypto/compress.c
@@ -33,12 +33,21 @@  static int crypto_decompress(struct crypto_tfm *tfm,
 	                                                   dlen);
 }
 
+static int crypto_zbufsize(struct crypto_tfm *tfm,
+			   unsigned int slen, unsigned int *dlen)
+{
+	if (!tfm->__crt_alg->cra_compress.coa_zbufsize)
+		return -ENOTSUPP;
+	return tfm->__crt_alg->cra_compress.coa_zbufsize(tfm, slen, dlen);
+}
+
 int crypto_init_compress_ops(struct crypto_tfm *tfm)
 {
 	struct compress_tfm *ops = &tfm->crt_compress;
 
 	ops->cot_compress = crypto_compress;
 	ops->cot_decompress = crypto_decompress;
+	ops->cot_zbufsize = crypto_zbufsize;
 
 	return 0;
 }
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 0f6ddac1acfc..a4a2a55080ad 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -39,6 +39,8 @@  struct scomp_alg {
 	int (*decompress)(struct crypto_scomp *tfm, const u8 *src,
 			  unsigned int slen, u8 *dst, unsigned int *dlen,
 			  void *ctx);
+	int (*zbufsize)(struct crypto_scomp *tfm, unsigned int slen,
+			unsigned int *dlen, void *ctx);
 	struct crypto_alg base;
 };
 
@@ -94,6 +96,15 @@  static inline int crypto_scomp_decompress(struct crypto_scomp *tfm,
 						 ctx);
 }
 
+static inline int crypto_scomp_zbufsize(struct crypto_scomp *tfm,
+					unsigned int slen,
+					unsigned int *dlen, void *ctx)
+{
+	if (!crypto_scomp_alg(tfm)->zbufsize)
+		return -ENOTSUPP;
+	return crypto_scomp_alg(tfm)->zbufsize(tfm, slen, dlen, ctx);
+}
+
 int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
 struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
 void crypto_acomp_scomp_free_ctx(struct acomp_req *req);
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 6eb06101089f..376c056447e7 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -362,6 +362,8 @@  struct compress_alg {
 			    unsigned int slen, u8 *dst, unsigned int *dlen);
 	int (*coa_decompress)(struct crypto_tfm *tfm, const u8 *src,
 			      unsigned int slen, u8 *dst, unsigned int *dlen);
+	int (*coa_zbufsize)(struct crypto_tfm *tfm, unsigned int slen,
+			    unsigned int *dlen);
 };
 
 
@@ -578,6 +580,8 @@  struct compress_tfm {
 	int (*cot_decompress)(struct crypto_tfm *tfm,
 	                      const u8 *src, unsigned int slen,
 	                      u8 *dst, unsigned int *dlen);
+	int (*cot_zbufsize)(struct crypto_tfm *tfm,
+			    unsigned int slen, unsigned int *dlen);
 };
 
 #define crt_ablkcipher	crt_u.ablkcipher
@@ -1660,5 +1664,13 @@  static inline int crypto_comp_decompress(struct crypto_comp *tfm,
 						    src, slen, dst, dlen);
 }
 
+static inline int crypto_comp_zbufsize(struct crypto_comp *tfm,
+				       unsigned int slen,
+				       unsigned int *dlen)
+{
+	return crypto_comp_crt(tfm)->cot_zbufsize(crypto_comp_tfm(tfm),
+						  slen, dlen);
+}
+
 #endif	/* _LINUX_CRYPTO_H */