| Message ID | 20180906225854.40989-3-keescook@chromium.org (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | Herbert Xu |
| Series | crypto: skcipher - Remove VLA usage |
On Thu, Sep 06, 2018 at 03:58:52PM -0700, Kees Cook wrote:
>
> @@ -437,6 +442,12 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm_check(
> {
> 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>
> +	if (req->__onstack) {
> +		if (WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
> +			    CRYPTO_ALG_ASYNC))
> +			return ERR_PTR(-EINVAL);
> +	}

Sorry but I don't like imposing a run-time check on everybody when
stack-based requests are the odd ones out. If we're going to make
this a run-time check (I'd much prefer a compile-time check, but I
understand that this may involve too much churn), then please do it
for stack-based request users only.

Thanks,
On 7 September 2018 at 05:42, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Sep 06, 2018 at 03:58:52PM -0700, Kees Cook wrote:
>>
>> @@ -437,6 +442,12 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm_check(
>> {
>> 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>>
>> +	if (req->__onstack) {
>> +		if (WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
>> +			    CRYPTO_ALG_ASYNC))
>> +			return ERR_PTR(-EINVAL);
>> +	}
>
> Sorry but I don't like imposing a run-time check on everybody when
> stack-based requests are the odd ones out. If we're going to make
> this a run-time check (I'd much prefer a compile-time check, but I
> understand that this may involve too much churn), then please do it
> for stack-based request users only.
>

OK, so given that all SKCIPHER_REQUEST_ON_STACK occurrences are
updated in this series anyway, perhaps we should add
skcipher_[en|de]crypt_onstack() flavors that encapsulate the
additional check? Only question is how to enforce at compile time that
those are used instead of the ordinary ones when using a stack
allocated request. Would you mind using some macro foo here involving
__builtin_types_compatible_p() ?
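[Editor's note: for readers unfamiliar with the trick Ard refers to, __builtin_types_compatible_p() lets a macro reject arguments of the wrong C type at build time. The sketch below is purely illustrative and not part of the series; the struct skcipher_request_onstack wrapper and the skcipher_encrypt_onstack() name are hypothetical.]

```c
/*
 * Illustrative sketch only (not from the series): a hypothetical
 * skcipher_encrypt_onstack() that only accepts a hypothetical
 * on-stack request type, enforced at compile time via
 * __builtin_types_compatible_p() and BUILD_BUG_ON().
 */
#include <linux/build_bug.h>
#include <crypto/skcipher.h>

struct skcipher_request_onstack {
	struct skcipher_request req;
};

#define skcipher_encrypt_onstack(r)					\
({									\
	/* Reject anything that is not the on-stack wrapper type. */	\
	BUILD_BUG_ON(!__builtin_types_compatible_p(typeof(*(r)),	\
			struct skcipher_request_onstack));		\
	crypto_skcipher_encrypt(&(r)->req);				\
})
```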
On Thu, Sep 6, 2018 at 8:42 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Thu, Sep 06, 2018 at 03:58:52PM -0700, Kees Cook wrote:
>>
>> @@ -437,6 +442,12 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm_check(
>> {
>> 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>>
>> +	if (req->__onstack) {
>> +		if (WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
>> +			    CRYPTO_ALG_ASYNC))
>> +			return ERR_PTR(-EINVAL);
>> +	}
>
> Sorry but I don't like imposing a run-time check on everybody when
> stack-based requests are the odd ones out. If we're going to make
> this a run-time check (I'd much prefer a compile-time check, but I
> understand that this may involve too much churn), then please do it
> for stack-based request users only.

I'll continue to investigate alternatives, but I wanted to point out
that the struct change actually fills an existing padding byte (so no
change in memory usage) and marking this as an unlikely() test means
it wouldn't even be measurable due to the branch predictor (so no
change in speed). encrypt/decrypt entry is a tiny tiny fraction of the
actual work done during encryption/decryption, etc.

-Kees
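[Editor's note: the unlikely() form Kees describes would look roughly like the sketch below. It mirrors the hunk in the patch at the bottom of this page and is illustrative only, not separate code from the thread.]

```c
/*
 * Sketch of the check with the unlikely() annotation Kees mentions,
 * so the common heap-allocated path pays only a well-predicted
 * branch. This sketches the helper in include/crypto/skcipher.h as
 * modified by the patch below; it is not a new API.
 */
static inline struct crypto_skcipher *
crypto_skcipher_reqtfm_check(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

	if (unlikely(req->__onstack) &&
	    WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
		    CRYPTO_ALG_ASYNC))
		return ERR_PTR(-EINVAL);

	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
		return ERR_PTR(-ENOKEY);

	return tfm;
}
```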
On Fri, Sep 07, 2018 at 08:56:23AM +0200, Ard Biesheuvel wrote:
>
> OK, so given that all SKCIPHER_REQUEST_ON_STACK occurrences are
> updated in this series anyway, perhaps we should add
> skcipher_[en|de]crypt_onstack() flavors that encapsulate the
> additional check? Only question is how to enforce at compile time that
> those are used instead of the ordinary ones when using a stack
> allocated request. Would you mind using some macro foo here involving
> __builtin_types_compatible_p() ?

Something like a completely new type which in reality is just a
wrapper around skcipher:

	struct crypto_sync_skcipher {
		struct crypto_skcipher base;
	} tfm;

	tfm = crypto_alloc_sync_skcipher(...);

	crypto_sync_skcipher_encrypt(...)
	crypto_sync_skcipher_decrypt(...)

These functions would just be trivial inline functions around their
crypto_skcipher counterparts.

Cheers,
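[Editor's note: spelled out a little, Herbert's idea is a one-member wrapper type plus thin pass-through helpers. The bodies below are a hedged sketch of what those "trivial inline functions" could look like; they follow the names in his outline but are not code from the thread.]

```c
/*
 * Sketch of the proposed wrapper type and trivial inline helpers.
 * Names follow Herbert's outline; the bodies are illustrative.
 */
#include <crypto/skcipher.h>

struct crypto_sync_skcipher {
	struct crypto_skcipher base;
};

static inline struct crypto_sync_skcipher *
crypto_alloc_sync_skcipher(const char *alg_name, u32 type, u32 mask)
{
	/*
	 * Adding CRYPTO_ALG_ASYNC to the mask (with the bit clear in
	 * "type") restricts the lookup to synchronous implementations.
	 */
	return (struct crypto_sync_skcipher *)
		crypto_alloc_skcipher(alg_name, type, mask | CRYPTO_ALG_ASYNC);
}

static inline int crypto_sync_skcipher_encrypt(struct skcipher_request *req)
{
	return crypto_skcipher_encrypt(req);	/* trivial pass-through */
}

static inline int crypto_sync_skcipher_decrypt(struct skcipher_request *req)
{
	return crypto_skcipher_decrypt(req);	/* trivial pass-through */
}
```

Because struct crypto_skcipher is the first (and only) member, the wrapper costs nothing at run time; the distinct C type is what gives the compiler something to check.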
On Fri, Sep 07, 2018 at 09:02:45AM -0700, Kees Cook wrote:
>
> I'll continue to investigate alternatives, but I wanted to point out
> that the struct change actually fills an existing padding byte (so no
> change in memory usage) and marking this as an unlikely() test means
> it wouldn't even be measurable due to the branch predictor (so no
> change in speed). encrypt/decrypt entry is a tiny tiny fraction of the
> actual work done during encryption/decryption, etc.

The point is the ON_STACK request stuff is purely for backwards
compatibility and we don't want it to proliferate and pollute the
core API.

Cheers,
On Mon, Sep 10, 2018 at 10:52 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Fri, Sep 07, 2018 at 08:56:23AM +0200, Ard Biesheuvel wrote:
>>
>> OK, so given that all SKCIPHER_REQUEST_ON_STACK occurrences are
>> updated in this series anyway, perhaps we should add
>> skcipher_[en|de]crypt_onstack() flavors that encapsulate the
>> additional check? Only question is how to enforce at compile time that
>> those are used instead of the ordinary ones when using a stack
>> allocated request. Would you mind using some macro foo here involving
>> __builtin_types_compatible_p() ?
>
> Something like a completely new type which in reality is just a
> wrapper around skcipher:
>
> 	struct crypto_sync_skcipher {
> 		struct crypto_skcipher base;
> 	} tfm;
>
> 	tfm = crypto_alloc_sync_skcipher(...);
>
> 	crypto_sync_skcipher_encrypt(...)
> 	crypto_sync_skcipher_decrypt(...)
>
> These functions would just be trivial inline functions around their
> crypto_skcipher counterparts.

This means new wrappers for the other helpers too, yes? For example:

	SKCIPHER_REQUEST_ON_STACK(nreq, ctx->null);

	skcipher_request_set_tfm(nreq, ctx->null);
	skcipher_request_set_callback(nreq, req->base.flags, NULL, NULL);
	skcipher_request_set_crypt(nreq, req->src, req->dst, nbytes, NULL);

	return crypto_skcipher_encrypt(nreq);

For the above, we'd also need:

sync_skcipher_request_set_tfm()
sync_skcipher_request_set_callback()
sync_skcipher_request_set_crypt()

-Kees
On Thu, Sep 13, 2018 at 9:46 AM, Kees Cook <keescook@chromium.org> wrote:
> On Mon, Sep 10, 2018 at 10:52 PM, Herbert Xu
> <herbert@gondor.apana.org.au> wrote:
>> On Fri, Sep 07, 2018 at 08:56:23AM +0200, Ard Biesheuvel wrote:
>>>
>>> OK, so given that all SKCIPHER_REQUEST_ON_STACK occurrences are
>>> updated in this series anyway, perhaps we should add
>>> skcipher_[en|de]crypt_onstack() flavors that encapsulate the
>>> additional check? Only question is how to enforce at compile time that
>>> those are used instead of the ordinary ones when using a stack
>>> allocated request. Would you mind using some macro foo here involving
>>> __builtin_types_compatible_p() ?
>>
>> Something like a completely new type which in reality is just a
>> wrapper around skcipher:
>>
>> 	struct crypto_sync_skcipher {
>> 		struct crypto_skcipher base;
>> 	} tfm;
>>
>> 	tfm = crypto_alloc_sync_skcipher(...);
>>
>> 	crypto_sync_skcipher_encrypt(...)
>> 	crypto_sync_skcipher_decrypt(...)
>>
>> These functions would just be trivial inline functions around their
>> crypto_skcipher counterparts.
>
> This means new wrappers for the other helpers too, yes? For example:
>
> 	SKCIPHER_REQUEST_ON_STACK(nreq, ctx->null);
>
> 	skcipher_request_set_tfm(nreq, ctx->null);
> 	skcipher_request_set_callback(nreq, req->base.flags, NULL, NULL);
> 	skcipher_request_set_crypt(nreq, req->src, req->dst, nbytes, NULL);
>
> 	return crypto_skcipher_encrypt(nreq);
>
> For the above, we'd also need:
>
> sync_skcipher_request_set_tfm()
> sync_skcipher_request_set_callback()
> sync_skcipher_request_set_crypt()

Wait, I think I misunderstood you. Did you mean a new top-level thing
(tfm?) not a new request type? That would mean at least replacing
skcipher_request_set_tfm() with a wrapper (since the tfm argument is
different), but _not_ encrypt/decrypt like you mention. I could
perform a type test in SKCIPHER_REQUEST_ON_STACK().

Globally:
- add struct crypto_sync_skcipher wrapper
- add crypto_alloc_sync_skcipher() to check non-ASYNC and request
  size of actual tfm
- add skcipher_request_set_sync_tfm() to attach the wrapped tfm to
  the request
- SKCIPHER_REQUEST_ON_STACK() would verify the tfm was a struct
  crypto_sync_skcipher.

Two changes per user:
- change allocation to use new crypto_alloc_sync_skcipher() which
  does the runtime checking
- change skcipher_request_set_tfm() to skcipher_request_set_sync_tfm()

This means struct skcipher_request is unchanged, along with
_set_callback, _set_crypt, _zero, and en/decrypt. API misuse would be
caught at build-time (via SKCIPHER_REQUEST_ON_STACK type checking)
and any request size problems would be caught at allocation time.

Does this sound like what you had in mind?

-Kees
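[Editor's note: put together, the "two changes per user" conversion Kees outlines would look roughly like the sketch below at a call site. crypto_alloc_sync_skcipher(), skcipher_request_set_sync_tfm() and the type-checking SKCIPHER_REQUEST_ON_STACK() are the proposed helpers from the list above; null_xfer() and its arguments are invented for illustration.]

```c
/*
 * Hedged sketch of the proposed conversion: only the tfm type and the
 * skcipher_request_set_sync_tfm() call differ from the old API;
 * callback/crypt setup and encrypt/decrypt are unchanged.
 * null_xfer() and its arguments are made up for this example.
 */
#include <crypto/skcipher.h>

static int null_xfer(struct crypto_sync_skcipher *null_tfm,
		     struct scatterlist *src, struct scatterlist *dst,
		     unsigned int nbytes)
{
	/* Would fail to build if null_tfm were a plain crypto_skcipher. */
	SKCIPHER_REQUEST_ON_STACK(nreq, null_tfm);

	skcipher_request_set_sync_tfm(nreq, null_tfm);	/* changed call */
	skcipher_request_set_callback(nreq, 0, NULL, NULL);
	skcipher_request_set_crypt(nreq, src, dst, nbytes, NULL);

	return crypto_skcipher_encrypt(nreq);		/* unchanged */
}
```

The caller's setup code would correspondingly switch from crypto_alloc_skcipher() to crypto_alloc_sync_skcipher(), which is where the non-ASYNC and request-size checks would live.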
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 6e954d398e0f..3aabd5d098ed 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -19,6 +19,7 @@
 
 /**
  * struct skcipher_request - Symmetric key cipher request
+ * @__onstack: 1 if the request was allocated by SKCIPHER_REQUEST_ON_STACK
  * @cryptlen: Number of bytes to encrypt or decrypt
  * @iv: Initialisation Vector
  * @src: Source SG list
@@ -27,6 +28,7 @@
  * @__ctx: Start of private context data
  */
 struct skcipher_request {
+	unsigned char __onstack;
 	unsigned int cryptlen;
 
 	u8 *iv;
@@ -139,9 +141,12 @@ struct skcipher_alg {
 	struct crypto_alg base;
 };
 
+/*
+ * This must only ever be used with synchronous algorithms.
+ */
 #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
 	char __##name##_desc[sizeof(struct skcipher_request) + \
-		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+		crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR = { 1 }; \
 	struct skcipher_request *name = (void *)__##name##_desc
 
 /**
@@ -437,6 +442,12 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm_check(
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 
+	if (req->__onstack) {
+		if (WARN_ON(crypto_skcipher_alg(tfm)->base.cra_flags &
+				CRYPTO_ALG_ASYNC))
+			return ERR_PTR(-EINVAL);
+	}
+
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		return ERR_PTR(-ENOKEY);
Check at use-time whether an skcipher request is on the stack. If it
is, enforce that it must be backed by a synchronous algorithm, as is
required:

https://www.redhat.com/archives/dm-devel/2018-January/msg00087.html

Co-developed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/crypto/skcipher.h | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
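[Editor's note: for context on the failure mode this patch guards against, below is a hedged sketch of the kind of caller the run-time check would now catch. The function and its arguments are invented; "async_tfm" is assumed to have been allocated without masking out CRYPTO_ALG_ASYNC, and whether a given algorithm name actually resolves to an async driver depends on what is registered on the system.]

```c
/*
 * Sketch of the misuse this patch turns into a loud failure: an
 * on-stack request paired with a tfm that may be asynchronous.
 * Invented example, not code from the series.
 */
#include <crypto/skcipher.h>

static int risky_onstack_encrypt(struct crypto_skcipher *async_tfm,
				 struct scatterlist *src,
				 struct scatterlist *dst,
				 unsigned int nbytes, u8 *iv)
{
	/* With this patch, the macro also sets req->__onstack = 1. */
	SKCIPHER_REQUEST_ON_STACK(req, async_tfm);

	skcipher_request_set_tfm(req, async_tfm);
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, src, dst, nbytes, iv);

	/*
	 * If async_tfm is backed by an ASYNC implementation, the
	 * use-time check WARNs and the operation fails with -EINVAL
	 * instead of letting the request outlive the stack frame it
	 * was allocated in.
	 */
	return crypto_skcipher_encrypt(req);
}
```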