From patchwork Wed Sep 19 02:10:38 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10605175
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net.
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id k23-v6sm21139152pgl.42.2018.09.18.19.11.04 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:05 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 01/23] crypto: skcipher - Introduce crypto_sync_skcipher Date: Tue, 18 Sep 2018 19:10:38 -0700 Message-Id: <20180919021100.3380-2-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In preparation for removal of VLAs due to skcipher requests on the stack via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure for the "sync skcipher" tfm, which is for handling the on-stack cases of skcipher, which are always non-ASYNC and have a known limited request size. The crypto API additions: struct crypto_sync_skcipher (wrapper for struct crypto_skcipher) crypto_alloc_sync_skcipher() crypto_free_sync_skcipher() crypto_sync_skcipher_setkey() crypto_sync_skcipher_get_flags() crypto_sync_skcipher_set_flags() crypto_sync_skcipher_clear_flags() crypto_sync_skcipher_blocksize() crypto_sync_skcipher_ivsize() crypto_sync_skcipher_reqtfm() skcipher_request_set_sync_tfm() SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check) Signed-off-by: Kees Cook Reviewed-by: Ard Biesheuvel --- crypto/skcipher.c | 24 +++++++++++++ include/crypto/skcipher.h | 75 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 99 insertions(+) diff --git a/crypto/skcipher.c b/crypto/skcipher.c index 0bd8c6caa498..4caab81d2d02 100644 --- a/crypto/skcipher.c +++ b/crypto/skcipher.c @@ -949,6 +949,30 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name, } EXPORT_SYMBOL_GPL(crypto_alloc_skcipher); +struct crypto_sync_skcipher *crypto_alloc_sync_skcipher( + const char *alg_name, u32 type, u32 mask) +{ + struct crypto_skcipher *tfm; + + /* Only sync algorithms allowed. */ + mask |= CRYPTO_ALG_ASYNC; + + tfm = crypto_alloc_tfm(alg_name, &crypto_skcipher_type2, type, mask); + + /* + * Make sure we do not allocate something that might get used with + * an on-stack request: check the request size. + */ + if (!IS_ERR(tfm) && WARN_ON(crypto_skcipher_reqsize(tfm) > + MAX_SYNC_SKCIPHER_REQSIZE)) { + crypto_free_skcipher(tfm); + return ERR_PTR(-EINVAL); + } + + return (struct crypto_sync_skcipher *)tfm; +} +EXPORT_SYMBOL_GPL(crypto_alloc_sync_skcipher); + int crypto_has_skcipher2(const char *alg_name, u32 type, u32 mask) { return crypto_type_has_alg(alg_name, &crypto_skcipher_type2, diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h index 2f327f090c3e..d00ce90dc7da 100644 --- a/include/crypto/skcipher.h +++ b/include/crypto/skcipher.h @@ -65,6 +65,10 @@ struct crypto_skcipher { struct crypto_tfm base; }; +struct crypto_sync_skcipher { + struct crypto_skcipher base; +}; + /** * struct skcipher_alg - symmetric key cipher definition * @min_keysize: Minimum key size supported by the transformation. This is the @@ -139,6 +143,19 @@ struct skcipher_alg { struct crypto_alg base; }; +#define MAX_SYNC_SKCIPHER_REQSIZE 384 +/* + * This performs a type-check against the "tfm" argument to make sure + * all users have the correct skcipher tfm for doing on-stack requests. 
+ */ +#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \ + char __##name##_desc[sizeof(struct skcipher_request) + \ + MAX_SYNC_SKCIPHER_REQSIZE + \ + (!(sizeof((struct crypto_sync_skcipher *)1 == \ + (typeof(tfm))1))) \ + ] CRYPTO_MINALIGN_ATTR; \ + struct skcipher_request *name = (void *)__##name##_desc + #define SKCIPHER_REQUEST_ON_STACK(name, tfm) \ char __##name##_desc[sizeof(struct skcipher_request) + \ crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \ @@ -197,6 +214,9 @@ static inline struct crypto_skcipher *__crypto_skcipher_cast( struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name, u32 type, u32 mask); +struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name, + u32 type, u32 mask); + static inline struct crypto_tfm *crypto_skcipher_tfm( struct crypto_skcipher *tfm) { @@ -212,6 +232,11 @@ static inline void crypto_free_skcipher(struct crypto_skcipher *tfm) crypto_destroy_tfm(tfm, crypto_skcipher_tfm(tfm)); } +static inline void crypto_free_sync_skcipher(struct crypto_sync_skcipher *tfm) +{ + crypto_free_skcipher(&tfm->base); +} + /** * crypto_has_skcipher() - Search for the availability of an skcipher. * @alg_name: is the cra_name / name or cra_driver_name / driver name of the @@ -280,6 +305,12 @@ static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm) return tfm->ivsize; } +static inline unsigned int crypto_sync_skcipher_ivsize( + struct crypto_sync_skcipher *tfm) +{ + return crypto_skcipher_ivsize(&tfm->base); +} + static inline unsigned int crypto_skcipher_alg_chunksize( struct skcipher_alg *alg) { @@ -356,6 +387,12 @@ static inline unsigned int crypto_skcipher_blocksize( return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm)); } +static inline unsigned int crypto_sync_skcipher_blocksize( + struct crypto_sync_skcipher *tfm) +{ + return crypto_skcipher_blocksize(&tfm->base); +} + static inline unsigned int crypto_skcipher_alignmask( struct crypto_skcipher *tfm) { @@ -379,6 +416,24 @@ static inline void crypto_skcipher_clear_flags(struct crypto_skcipher *tfm, crypto_tfm_clear_flags(crypto_skcipher_tfm(tfm), flags); } +static inline u32 crypto_sync_skcipher_get_flags( + struct crypto_sync_skcipher *tfm) +{ + return crypto_skcipher_get_flags(&tfm->base); +} + +static inline void crypto_sync_skcipher_set_flags( + struct crypto_sync_skcipher *tfm, u32 flags) +{ + crypto_skcipher_set_flags(&tfm->base, flags); +} + +static inline void crypto_sync_skcipher_clear_flags( + struct crypto_sync_skcipher *tfm, u32 flags) +{ + crypto_skcipher_clear_flags(&tfm->base, flags); +} + /** * crypto_skcipher_setkey() - set key for cipher * @tfm: cipher handle @@ -401,6 +456,12 @@ static inline int crypto_skcipher_setkey(struct crypto_skcipher *tfm, return tfm->setkey(tfm, key, keylen); } +static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm, + const u8 *key, unsigned int keylen) +{ + return crypto_skcipher_setkey(&tfm->base, key, keylen); +} + static inline unsigned int crypto_skcipher_default_keysize( struct crypto_skcipher *tfm) { @@ -422,6 +483,14 @@ static inline struct crypto_skcipher *crypto_skcipher_reqtfm( return __crypto_skcipher_cast(req->base.tfm); } +static inline struct crypto_sync_skcipher *crypto_sync_skcipher_reqtfm( + struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + + return container_of(tfm, struct crypto_sync_skcipher, base); +} + /** * crypto_skcipher_encrypt() - encrypt plaintext * @req: reference to the skcipher_request handle that holds all 
information
@@ -500,6 +569,12 @@ static inline void skcipher_request_set_tfm(struct skcipher_request *req,
 	req->base.tfm = crypto_skcipher_tfm(tfm);
 }
 
+static inline void skcipher_request_set_sync_tfm(struct skcipher_request *req,
+					    struct crypto_sync_skcipher *tfm)
+{
+	skcipher_request_set_tfm(req, &tfm->base);
+}
+
 static inline struct skcipher_request *skcipher_request_cast(
 	struct crypto_async_request *req)
 {

From patchwork Wed Sep 19 02:10:39 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10605135
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net.
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id h130-v6sm24497825pgc.88.2018.09.18.19.11.04 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:05 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Trond Myklebust , Anna Schumaker , "J. Bruce Fields" , Jeff Layton , YueHaibing , linux-nfs@vger.kernel.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 02/23] gss_krb5: Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:39 -0700 Message-Id: <20180919021100.3380-3-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Trond Myklebust Cc: Anna Schumaker Cc: "J. Bruce Fields" Cc: Jeff Layton Cc: YueHaibing Cc: linux-nfs@vger.kernel.org Signed-off-by: Kees Cook --- include/linux/sunrpc/gss_krb5.h | 30 ++++----- net/sunrpc/auth_gss/gss_krb5_crypto.c | 87 ++++++++++++++------------- net/sunrpc/auth_gss/gss_krb5_keys.c | 9 ++- net/sunrpc/auth_gss/gss_krb5_mech.c | 53 ++++++++-------- net/sunrpc/auth_gss/gss_krb5_seqnum.c | 18 +++--- net/sunrpc/auth_gss/gss_krb5_wrap.c | 20 +++--- 6 files changed, 108 insertions(+), 109 deletions(-) diff --git a/include/linux/sunrpc/gss_krb5.h b/include/linux/sunrpc/gss_krb5.h index 7df625d41e35..f6e8ceafafd8 100644 --- a/include/linux/sunrpc/gss_krb5.h +++ b/include/linux/sunrpc/gss_krb5.h @@ -71,10 +71,10 @@ struct gss_krb5_enctype { const u32 keyed_cksum; /* is it a keyed cksum? 
*/ const u32 keybytes; /* raw key len, in bytes */ const u32 keylength; /* final key len, in bytes */ - u32 (*encrypt) (struct crypto_skcipher *tfm, + u32 (*encrypt) (struct crypto_sync_skcipher *tfm, void *iv, void *in, void *out, int length); /* encryption function */ - u32 (*decrypt) (struct crypto_skcipher *tfm, + u32 (*decrypt) (struct crypto_sync_skcipher *tfm, void *iv, void *in, void *out, int length); /* decryption function */ u32 (*mk_key) (const struct gss_krb5_enctype *gk5e, @@ -98,12 +98,12 @@ struct krb5_ctx { u32 enctype; u32 flags; const struct gss_krb5_enctype *gk5e; /* enctype-specific info */ - struct crypto_skcipher *enc; - struct crypto_skcipher *seq; - struct crypto_skcipher *acceptor_enc; - struct crypto_skcipher *initiator_enc; - struct crypto_skcipher *acceptor_enc_aux; - struct crypto_skcipher *initiator_enc_aux; + struct crypto_sync_skcipher *enc; + struct crypto_sync_skcipher *seq; + struct crypto_sync_skcipher *acceptor_enc; + struct crypto_sync_skcipher *initiator_enc; + struct crypto_sync_skcipher *acceptor_enc_aux; + struct crypto_sync_skcipher *initiator_enc_aux; u8 Ksess[GSS_KRB5_MAX_KEYLEN]; /* session key */ u8 cksum[GSS_KRB5_MAX_KEYLEN]; s32 endtime; @@ -262,24 +262,24 @@ gss_unwrap_kerberos(struct gss_ctx *ctx_id, int offset, u32 -krb5_encrypt(struct crypto_skcipher *key, +krb5_encrypt(struct crypto_sync_skcipher *key, void *iv, void *in, void *out, int length); u32 -krb5_decrypt(struct crypto_skcipher *key, +krb5_decrypt(struct crypto_sync_skcipher *key, void *iv, void *in, void *out, int length); int -gss_encrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *outbuf, +gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *outbuf, int offset, struct page **pages); int -gss_decrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *inbuf, +gss_decrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *inbuf, int offset); s32 krb5_make_seq_num(struct krb5_ctx *kctx, - struct crypto_skcipher *key, + struct crypto_sync_skcipher *key, int direction, u32 seqnum, unsigned char *cksum, unsigned char *buf); @@ -320,12 +320,12 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, int krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, - struct crypto_skcipher *cipher, + struct crypto_sync_skcipher *cipher, unsigned char *cksum); int krb5_rc4_setup_enc_key(struct krb5_ctx *kctx, - struct crypto_skcipher *cipher, + struct crypto_sync_skcipher *cipher, s32 seqnum); void gss_krb5_make_confounder(char *p, u32 conflen); diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c index 0220e1ca5280..4f43383971ba 100644 --- a/net/sunrpc/auth_gss/gss_krb5_crypto.c +++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c @@ -53,7 +53,7 @@ u32 krb5_encrypt( - struct crypto_skcipher *tfm, + struct crypto_sync_skcipher *tfm, void * iv, void * in, void * out, @@ -62,24 +62,24 @@ krb5_encrypt( u32 ret = -EINVAL; struct scatterlist sg[1]; u8 local_iv[GSS_KRB5_MAX_BLOCKSIZE] = {0}; - SKCIPHER_REQUEST_ON_STACK(req, tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm); - if (length % crypto_skcipher_blocksize(tfm) != 0) + if (length % crypto_sync_skcipher_blocksize(tfm) != 0) goto out; - if (crypto_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) { + if (crypto_sync_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) { dprintk("RPC: gss_k5encrypt: tfm iv size too large %d\n", - crypto_skcipher_ivsize(tfm)); + crypto_sync_skcipher_ivsize(tfm)); goto out; } if (iv) - memcpy(local_iv, iv, crypto_skcipher_ivsize(tfm)); + memcpy(local_iv, iv, 
crypto_sync_skcipher_ivsize(tfm)); memcpy(out, in, length); sg_init_one(sg, out, length); - skcipher_request_set_tfm(req, tfm); + skcipher_request_set_sync_tfm(req, tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, length, local_iv); @@ -92,7 +92,7 @@ krb5_encrypt( u32 krb5_decrypt( - struct crypto_skcipher *tfm, + struct crypto_sync_skcipher *tfm, void * iv, void * in, void * out, @@ -101,23 +101,23 @@ krb5_decrypt( u32 ret = -EINVAL; struct scatterlist sg[1]; u8 local_iv[GSS_KRB5_MAX_BLOCKSIZE] = {0}; - SKCIPHER_REQUEST_ON_STACK(req, tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm); - if (length % crypto_skcipher_blocksize(tfm) != 0) + if (length % crypto_sync_skcipher_blocksize(tfm) != 0) goto out; - if (crypto_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) { + if (crypto_sync_skcipher_ivsize(tfm) > GSS_KRB5_MAX_BLOCKSIZE) { dprintk("RPC: gss_k5decrypt: tfm iv size too large %d\n", - crypto_skcipher_ivsize(tfm)); + crypto_sync_skcipher_ivsize(tfm)); goto out; } if (iv) - memcpy(local_iv,iv, crypto_skcipher_ivsize(tfm)); + memcpy(local_iv, iv, crypto_sync_skcipher_ivsize(tfm)); memcpy(out, in, length); sg_init_one(sg, out, length); - skcipher_request_set_tfm(req, tfm); + skcipher_request_set_sync_tfm(req, tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, length, local_iv); @@ -466,7 +466,8 @@ encryptor(struct scatterlist *sg, void *data) { struct encryptor_desc *desc = data; struct xdr_buf *outbuf = desc->outbuf; - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(desc->req); + struct crypto_sync_skcipher *tfm = + crypto_sync_skcipher_reqtfm(desc->req); struct page *in_page; int thislen = desc->fraglen + sg->length; int fraglen, ret; @@ -492,7 +493,7 @@ encryptor(struct scatterlist *sg, void *data) desc->fraglen += sg->length; desc->pos += sg->length; - fraglen = thislen & (crypto_skcipher_blocksize(tfm) - 1); + fraglen = thislen & (crypto_sync_skcipher_blocksize(tfm) - 1); thislen -= fraglen; if (thislen == 0) @@ -526,16 +527,16 @@ encryptor(struct scatterlist *sg, void *data) } int -gss_encrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *buf, +gss_encrypt_xdr_buf(struct crypto_sync_skcipher *tfm, struct xdr_buf *buf, int offset, struct page **pages) { int ret; struct encryptor_desc desc; - SKCIPHER_REQUEST_ON_STACK(req, tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm); - BUG_ON((buf->len - offset) % crypto_skcipher_blocksize(tfm) != 0); + BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0); - skcipher_request_set_tfm(req, tfm); + skcipher_request_set_sync_tfm(req, tfm); skcipher_request_set_callback(req, 0, NULL, NULL); memset(desc.iv, 0, sizeof(desc.iv)); @@ -567,7 +568,8 @@ decryptor(struct scatterlist *sg, void *data) { struct decryptor_desc *desc = data; int thislen = desc->fraglen + sg->length; - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(desc->req); + struct crypto_sync_skcipher *tfm = + crypto_sync_skcipher_reqtfm(desc->req); int fraglen, ret; /* Worst case is 4 fragments: head, end of page 1, start @@ -578,7 +580,7 @@ decryptor(struct scatterlist *sg, void *data) desc->fragno++; desc->fraglen += sg->length; - fraglen = thislen & (crypto_skcipher_blocksize(tfm) - 1); + fraglen = thislen & (crypto_sync_skcipher_blocksize(tfm) - 1); thislen -= fraglen; if (thislen == 0) @@ -608,17 +610,17 @@ decryptor(struct scatterlist *sg, void *data) } int -gss_decrypt_xdr_buf(struct crypto_skcipher *tfm, struct xdr_buf *buf, +gss_decrypt_xdr_buf(struct 
crypto_sync_skcipher *tfm, struct xdr_buf *buf, int offset) { int ret; struct decryptor_desc desc; - SKCIPHER_REQUEST_ON_STACK(req, tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm); /* XXXJBF: */ - BUG_ON((buf->len - offset) % crypto_skcipher_blocksize(tfm) != 0); + BUG_ON((buf->len - offset) % crypto_sync_skcipher_blocksize(tfm) != 0); - skcipher_request_set_tfm(req, tfm); + skcipher_request_set_sync_tfm(req, tfm); skcipher_request_set_callback(req, 0, NULL, NULL); memset(desc.iv, 0, sizeof(desc.iv)); @@ -672,12 +674,12 @@ xdr_extend_head(struct xdr_buf *buf, unsigned int base, unsigned int shiftlen) } static u32 -gss_krb5_cts_crypt(struct crypto_skcipher *cipher, struct xdr_buf *buf, +gss_krb5_cts_crypt(struct crypto_sync_skcipher *cipher, struct xdr_buf *buf, u32 offset, u8 *iv, struct page **pages, int encrypt) { u32 ret; struct scatterlist sg[1]; - SKCIPHER_REQUEST_ON_STACK(req, cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, cipher); u8 *data; struct page **save_pages; u32 len = buf->len - offset; @@ -706,7 +708,7 @@ gss_krb5_cts_crypt(struct crypto_skcipher *cipher, struct xdr_buf *buf, sg_init_one(sg, data, len); - skcipher_request_set_tfm(req, cipher); + skcipher_request_set_sync_tfm(req, cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, len, iv); @@ -735,7 +737,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_netobj hmac; u8 *cksumkey; u8 *ecptr; - struct crypto_skcipher *cipher, *aux_cipher; + struct crypto_sync_skcipher *cipher, *aux_cipher; int blocksize; struct page **save_pages; int nblocks, nbytes; @@ -754,7 +756,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset, cksumkey = kctx->acceptor_integ; usage = KG_USAGE_ACCEPTOR_SEAL; } - blocksize = crypto_skcipher_blocksize(cipher); + blocksize = crypto_sync_skcipher_blocksize(cipher); /* hide the gss token header and insert the confounder */ offset += GSS_KRB5_TOK_HDR_LEN; @@ -807,7 +809,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset, memset(desc.iv, 0, sizeof(desc.iv)); if (cbcbytes) { - SKCIPHER_REQUEST_ON_STACK(req, aux_cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, aux_cipher); desc.pos = offset + GSS_KRB5_TOK_HDR_LEN; desc.fragno = 0; @@ -816,7 +818,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset, desc.outbuf = buf; desc.req = req; - skcipher_request_set_tfm(req, aux_cipher); + skcipher_request_set_sync_tfm(req, aux_cipher); skcipher_request_set_callback(req, 0, NULL, NULL); sg_init_table(desc.infrags, 4); @@ -855,7 +857,7 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, struct xdr_buf subbuf; u32 ret = 0; u8 *cksum_key; - struct crypto_skcipher *cipher, *aux_cipher; + struct crypto_sync_skcipher *cipher, *aux_cipher; struct xdr_netobj our_hmac_obj; u8 our_hmac[GSS_KRB5_MAX_CKSUM_LEN]; u8 pkt_hmac[GSS_KRB5_MAX_CKSUM_LEN]; @@ -874,7 +876,7 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, cksum_key = kctx->initiator_integ; usage = KG_USAGE_INITIATOR_SEAL; } - blocksize = crypto_skcipher_blocksize(cipher); + blocksize = crypto_sync_skcipher_blocksize(cipher); /* create a segment skipping the header and leaving out the checksum */ @@ -891,13 +893,13 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, memset(desc.iv, 0, sizeof(desc.iv)); if (cbcbytes) { - SKCIPHER_REQUEST_ON_STACK(req, aux_cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, aux_cipher); desc.fragno = 0; desc.fraglen = 0; desc.req = req; - skcipher_request_set_tfm(req, 
aux_cipher); + skcipher_request_set_sync_tfm(req, aux_cipher); skcipher_request_set_callback(req, 0, NULL, NULL); sg_init_table(desc.frags, 4); @@ -946,7 +948,8 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf, * Set the key of the given cipher. */ int -krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher, +krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, + struct crypto_sync_skcipher *cipher, unsigned char *cksum) { struct crypto_shash *hmac; @@ -994,7 +997,7 @@ krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher, if (err) goto out_err; - err = crypto_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength); + err = crypto_sync_skcipher_setkey(cipher, Kseq, kctx->gk5e->keylength); if (err) goto out_err; @@ -1012,7 +1015,8 @@ krb5_rc4_setup_seq_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher, * Set the key of cipher kctx->enc. */ int -krb5_rc4_setup_enc_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher, +krb5_rc4_setup_enc_key(struct krb5_ctx *kctx, + struct crypto_sync_skcipher *cipher, s32 seqnum) { struct crypto_shash *hmac; @@ -1069,7 +1073,8 @@ krb5_rc4_setup_enc_key(struct krb5_ctx *kctx, struct crypto_skcipher *cipher, if (err) goto out_err; - err = crypto_skcipher_setkey(cipher, Kcrypt, kctx->gk5e->keylength); + err = crypto_sync_skcipher_setkey(cipher, Kcrypt, + kctx->gk5e->keylength); if (err) goto out_err; diff --git a/net/sunrpc/auth_gss/gss_krb5_keys.c b/net/sunrpc/auth_gss/gss_krb5_keys.c index f7fe2d2b851f..550fdf18d3b3 100644 --- a/net/sunrpc/auth_gss/gss_krb5_keys.c +++ b/net/sunrpc/auth_gss/gss_krb5_keys.c @@ -147,7 +147,7 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e, size_t blocksize, keybytes, keylength, n; unsigned char *inblockdata, *outblockdata, *rawkey; struct xdr_netobj inblock, outblock; - struct crypto_skcipher *cipher; + struct crypto_sync_skcipher *cipher; u32 ret = EINVAL; blocksize = gk5e->blocksize; @@ -157,11 +157,10 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e, if ((inkey->len != keylength) || (outkey->len != keylength)) goto err_return; - cipher = crypto_alloc_skcipher(gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + cipher = crypto_alloc_sync_skcipher(gk5e->encrypt_name, 0, 0); if (IS_ERR(cipher)) goto err_return; - if (crypto_skcipher_setkey(cipher, inkey->data, inkey->len)) + if (crypto_sync_skcipher_setkey(cipher, inkey->data, inkey->len)) goto err_return; /* allocate and set up buffers */ @@ -238,7 +237,7 @@ u32 krb5_derive_key(const struct gss_krb5_enctype *gk5e, memset(inblockdata, 0, blocksize); kfree(inblockdata); err_free_cipher: - crypto_free_skcipher(cipher); + crypto_free_sync_skcipher(cipher); err_return: return ret; } diff --git a/net/sunrpc/auth_gss/gss_krb5_mech.c b/net/sunrpc/auth_gss/gss_krb5_mech.c index 7bb2514aadd9..7f0424dfa8f6 100644 --- a/net/sunrpc/auth_gss/gss_krb5_mech.c +++ b/net/sunrpc/auth_gss/gss_krb5_mech.c @@ -218,7 +218,7 @@ simple_get_netobj(const void *p, const void *end, struct xdr_netobj *res) static inline const void * get_key(const void *p, const void *end, - struct krb5_ctx *ctx, struct crypto_skcipher **res) + struct krb5_ctx *ctx, struct crypto_sync_skcipher **res) { struct xdr_netobj key; int alg; @@ -246,15 +246,14 @@ get_key(const void *p, const void *end, if (IS_ERR(p)) goto out_err; - *res = crypto_alloc_skcipher(ctx->gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + *res = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0); if (IS_ERR(*res)) { printk(KERN_WARNING "gss_kerberos_mech: unable 
to initialize " "crypto algorithm %s\n", ctx->gk5e->encrypt_name); *res = NULL; goto out_err_free_key; } - if (crypto_skcipher_setkey(*res, key.data, key.len)) { + if (crypto_sync_skcipher_setkey(*res, key.data, key.len)) { printk(KERN_WARNING "gss_kerberos_mech: error setting key for " "crypto algorithm %s\n", ctx->gk5e->encrypt_name); goto out_err_free_tfm; @@ -264,7 +263,7 @@ get_key(const void *p, const void *end, return p; out_err_free_tfm: - crypto_free_skcipher(*res); + crypto_free_sync_skcipher(*res); out_err_free_key: kfree(key.data); p = ERR_PTR(-EINVAL); @@ -336,30 +335,30 @@ gss_import_v1_context(const void *p, const void *end, struct krb5_ctx *ctx) return 0; out_err_free_key2: - crypto_free_skcipher(ctx->seq); + crypto_free_sync_skcipher(ctx->seq); out_err_free_key1: - crypto_free_skcipher(ctx->enc); + crypto_free_sync_skcipher(ctx->enc); out_err_free_mech: kfree(ctx->mech_used.data); out_err: return PTR_ERR(p); } -static struct crypto_skcipher * +static struct crypto_sync_skcipher * context_v2_alloc_cipher(struct krb5_ctx *ctx, const char *cname, u8 *key) { - struct crypto_skcipher *cp; + struct crypto_sync_skcipher *cp; - cp = crypto_alloc_skcipher(cname, 0, CRYPTO_ALG_ASYNC); + cp = crypto_alloc_sync_skcipher(cname, 0, 0); if (IS_ERR(cp)) { dprintk("gss_kerberos_mech: unable to initialize " "crypto algorithm %s\n", cname); return NULL; } - if (crypto_skcipher_setkey(cp, key, ctx->gk5e->keylength)) { + if (crypto_sync_skcipher_setkey(cp, key, ctx->gk5e->keylength)) { dprintk("gss_kerberos_mech: error setting key for " "crypto algorithm %s\n", cname); - crypto_free_skcipher(cp); + crypto_free_sync_skcipher(cp); return NULL; } return cp; @@ -413,9 +412,9 @@ context_derive_keys_des3(struct krb5_ctx *ctx, gfp_t gfp_mask) return 0; out_free_enc: - crypto_free_skcipher(ctx->enc); + crypto_free_sync_skcipher(ctx->enc); out_free_seq: - crypto_free_skcipher(ctx->seq); + crypto_free_sync_skcipher(ctx->seq); out_err: return -EINVAL; } @@ -469,17 +468,15 @@ context_derive_keys_rc4(struct krb5_ctx *ctx) /* * allocate hash, and skciphers for data and seqnum encryption */ - ctx->enc = crypto_alloc_skcipher(ctx->gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + ctx->enc = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0); if (IS_ERR(ctx->enc)) { err = PTR_ERR(ctx->enc); goto out_err_free_hmac; } - ctx->seq = crypto_alloc_skcipher(ctx->gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + ctx->seq = crypto_alloc_sync_skcipher(ctx->gk5e->encrypt_name, 0, 0); if (IS_ERR(ctx->seq)) { - crypto_free_skcipher(ctx->enc); + crypto_free_sync_skcipher(ctx->enc); err = PTR_ERR(ctx->seq); goto out_err_free_hmac; } @@ -591,7 +588,7 @@ context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask) context_v2_alloc_cipher(ctx, "cbc(aes)", ctx->acceptor_seal); if (ctx->acceptor_enc_aux == NULL) { - crypto_free_skcipher(ctx->initiator_enc_aux); + crypto_free_sync_skcipher(ctx->initiator_enc_aux); goto out_free_acceptor_enc; } } @@ -599,9 +596,9 @@ context_derive_keys_new(struct krb5_ctx *ctx, gfp_t gfp_mask) return 0; out_free_acceptor_enc: - crypto_free_skcipher(ctx->acceptor_enc); + crypto_free_sync_skcipher(ctx->acceptor_enc); out_free_initiator_enc: - crypto_free_skcipher(ctx->initiator_enc); + crypto_free_sync_skcipher(ctx->initiator_enc); out_err: return -EINVAL; } @@ -713,12 +710,12 @@ static void gss_delete_sec_context_kerberos(void *internal_ctx) { struct krb5_ctx *kctx = internal_ctx; - crypto_free_skcipher(kctx->seq); - crypto_free_skcipher(kctx->enc); - crypto_free_skcipher(kctx->acceptor_enc); - 
crypto_free_skcipher(kctx->initiator_enc); - crypto_free_skcipher(kctx->acceptor_enc_aux); - crypto_free_skcipher(kctx->initiator_enc_aux); + crypto_free_sync_skcipher(kctx->seq); + crypto_free_sync_skcipher(kctx->enc); + crypto_free_sync_skcipher(kctx->acceptor_enc); + crypto_free_sync_skcipher(kctx->initiator_enc); + crypto_free_sync_skcipher(kctx->acceptor_enc_aux); + crypto_free_sync_skcipher(kctx->initiator_enc_aux); kfree(kctx->mech_used.data); kfree(kctx); } diff --git a/net/sunrpc/auth_gss/gss_krb5_seqnum.c b/net/sunrpc/auth_gss/gss_krb5_seqnum.c index c8b9082f4a9d..fb6656295204 100644 --- a/net/sunrpc/auth_gss/gss_krb5_seqnum.c +++ b/net/sunrpc/auth_gss/gss_krb5_seqnum.c @@ -43,13 +43,12 @@ static s32 krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum, unsigned char *cksum, unsigned char *buf) { - struct crypto_skcipher *cipher; + struct crypto_sync_skcipher *cipher; unsigned char plain[8]; s32 code; dprintk("RPC: %s:\n", __func__); - cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0); if (IS_ERR(cipher)) return PTR_ERR(cipher); @@ -68,12 +67,12 @@ krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum, code = krb5_encrypt(cipher, cksum, plain, buf, 8); out: - crypto_free_skcipher(cipher); + crypto_free_sync_skcipher(cipher); return code; } s32 krb5_make_seq_num(struct krb5_ctx *kctx, - struct crypto_skcipher *key, + struct crypto_sync_skcipher *key, int direction, u32 seqnum, unsigned char *cksum, unsigned char *buf) @@ -101,13 +100,12 @@ static s32 krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum, unsigned char *buf, int *direction, s32 *seqnum) { - struct crypto_skcipher *cipher; + struct crypto_sync_skcipher *cipher; unsigned char plain[8]; s32 code; dprintk("RPC: %s:\n", __func__); - cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, 0, 0); if (IS_ERR(cipher)) return PTR_ERR(cipher); @@ -130,7 +128,7 @@ krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum, *seqnum = ((plain[0] << 24) | (plain[1] << 16) | (plain[2] << 8) | (plain[3])); out: - crypto_free_skcipher(cipher); + crypto_free_sync_skcipher(cipher); return code; } @@ -142,7 +140,7 @@ krb5_get_seq_num(struct krb5_ctx *kctx, { s32 code; unsigned char plain[8]; - struct crypto_skcipher *key = kctx->seq; + struct crypto_sync_skcipher *key = kctx->seq; dprintk("RPC: krb5_get_seq_num:\n"); diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c index 39a2e672900b..3d975a4013d2 100644 --- a/net/sunrpc/auth_gss/gss_krb5_wrap.c +++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c @@ -174,7 +174,7 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset, now = get_seconds(); - blocksize = crypto_skcipher_blocksize(kctx->enc); + blocksize = crypto_sync_skcipher_blocksize(kctx->enc); gss_krb5_add_padding(buf, offset, blocksize); BUG_ON((buf->len - offset) % blocksize); plainlen = conflen + buf->len - offset; @@ -239,10 +239,10 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset, return GSS_S_FAILURE; if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) { - struct crypto_skcipher *cipher; + struct crypto_sync_skcipher *cipher; int err; - cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0, - CRYPTO_ALG_ASYNC); + cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name, + 0, 0); if (IS_ERR(cipher)) return GSS_S_FAILURE; @@ -250,7 +250,7 @@ 
gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
 		err = gss_encrypt_xdr_buf(cipher, buf,
 					  offset + headlen - conflen, pages);
-		crypto_free_skcipher(cipher);
+		crypto_free_sync_skcipher(cipher);
 		if (err)
 			return GSS_S_FAILURE;
 	} else {
@@ -327,18 +327,18 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 		return GSS_S_BAD_SIG;
 
 	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
-		struct crypto_skcipher *cipher;
+		struct crypto_sync_skcipher *cipher;
 		int err;
 
-		cipher = crypto_alloc_skcipher(kctx->gk5e->encrypt_name, 0,
-					       CRYPTO_ALG_ASYNC);
+		cipher = crypto_alloc_sync_skcipher(kctx->gk5e->encrypt_name,
+						    0, 0);
 		if (IS_ERR(cipher))
 			return GSS_S_FAILURE;
 
 		krb5_rc4_setup_enc_key(kctx, cipher, seqnum);
 
 		err = gss_decrypt_xdr_buf(cipher, buf, crypt_offset);
-		crypto_free_skcipher(cipher);
+		crypto_free_sync_skcipher(cipher);
 		if (err)
 			return GSS_S_DEFECTIVE_TOKEN;
 	} else {
@@ -371,7 +371,7 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
 	/* Copy the data back to the right position. XXX: Would probably be
 	 * better to copy and encrypt at the same time. */
 
-	blocksize = crypto_skcipher_blocksize(kctx->enc);
+	blocksize = crypto_sync_skcipher_blocksize(kctx->enc);
 	data_start = ptr + (GSS_KRB5_TOK_HDR_LEN + kctx->gk5e->cksumlength) +
 					conflen;
 	orig_start = buf->head[0].iov_base + offset;
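
The substitution made above recurs nearly verbatim in every caller converted
by this series: the tfm type, the allocation call, setkey, the on-stack
request macro, and the request-to-tfm binding each switch to their "sync"
variant, while the crypto_skcipher_encrypt()/decrypt() calls themselves stay
the same.  SYNC_SKCIPHER_REQUEST_ON_STACK() sizes its buffer from the fixed
MAX_SYNC_SKCIPHER_REQSIZE instead of the runtime crypto_skcipher_reqsize(),
which is what eliminates the VLA.  The sketch below summarizes the mapping as
a minimal, hypothetical helper (example_arc4_crypt() is illustrative only,
not code from the series); each comment names the call it replaces:

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int example_arc4_crypt(const u8 *key, unsigned int klen,
			      u8 *data, unsigned int len)
{
	struct crypto_sync_skcipher *tfm; /* was: struct crypto_skcipher * */
	struct scatterlist sg;
	int err;

	/* was: crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC) */
	tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* was: crypto_skcipher_setkey(tfm, key, klen) */
	err = crypto_sync_skcipher_setkey(tfm, key, klen);
	if (!err) {
		/* was: SKCIPHER_REQUEST_ON_STACK(req, tfm), a VLA */
		SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

		/* was: skcipher_request_set_tfm(req, tfm) */
		skcipher_request_set_sync_tfm(req, tfm);
		skcipher_request_set_callback(req, 0, NULL, NULL);
		sg_init_one(&sg, data, len);
		skcipher_request_set_crypt(req, &sg, &sg, len, NULL);
		err = crypto_skcipher_encrypt(req);	/* unchanged */
	}

	/* was: crypto_free_skcipher(tfm) */
	crypto_free_sync_skcipher(tfm);
	return err;
}
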
From patchwork Wed Sep 19 02:10:40 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10605173
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, Johannes Berg, linux-wireless@vger.kernel.org,
    Ard Biesheuvel, Eric Biggers, linux-crypto,
    Linux Kernel Mailing List
Subject: [PATCH crypto-next 03/23] lib80211: Remove VLA usage of skcipher
Date: Tue, 18 Sep 2018 19:10:40 -0700
Message-Id: <20180919021100.3380-4-keescook@chromium.org>
In-Reply-To: <20180919021100.3380-1-keescook@chromium.org>
References: <20180919021100.3380-1-keescook@chromium.org>

In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Johannes Berg Cc: linux-wireless@vger.kernel.org Signed-off-by: Kees Cook --- drivers/staging/rtl8192e/rtllib_crypt_tkip.c | 34 +++++++++---------- drivers/staging/rtl8192e/rtllib_crypt_wep.c | 28 +++++++-------- .../rtl8192u/ieee80211/ieee80211_crypt_tkip.c | 34 +++++++++---------- .../rtl8192u/ieee80211/ieee80211_crypt_wep.c | 26 +++++++------- net/wireless/lib80211_crypt_tkip.c | 34 +++++++++---------- net/wireless/lib80211_crypt_wep.c | 28 +++++++-------- 6 files changed, 89 insertions(+), 95 deletions(-) diff --git a/drivers/staging/rtl8192e/rtllib_crypt_tkip.c b/drivers/staging/rtl8192e/rtllib_crypt_tkip.c index 9f18be14dda6..f38f1f74fcd6 100644 --- a/drivers/staging/rtl8192e/rtllib_crypt_tkip.c +++ b/drivers/staging/rtl8192e/rtllib_crypt_tkip.c @@ -49,9 +49,9 @@ struct rtllib_tkip_data { u32 dot11RSNAStatsTKIPLocalMICFailures; int key_idx; - struct crypto_skcipher *rx_tfm_arc4; + struct crypto_sync_skcipher *rx_tfm_arc4; struct crypto_shash *rx_tfm_michael; - struct crypto_skcipher *tx_tfm_arc4; + struct crypto_sync_skcipher *tx_tfm_arc4; struct crypto_shash *tx_tfm_michael; /* scratch buffers for virt_to_page() (crypto API) */ u8 rx_hdr[16]; @@ -66,8 +66,7 @@ static void *rtllib_tkip_init(int key_idx) if (priv == NULL) goto fail; priv->key_idx = key_idx; - priv->tx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, - CRYPTO_ALG_ASYNC); + priv->tx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->tx_tfm_arc4)) { pr_debug("Could not allocate crypto API arc4\n"); priv->tx_tfm_arc4 = NULL; @@ -81,8 +80,7 @@ static void *rtllib_tkip_init(int key_idx) goto fail; } - priv->rx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, - CRYPTO_ALG_ASYNC); + priv->rx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->rx_tfm_arc4)) { pr_debug("Could not allocate crypto API arc4\n"); priv->rx_tfm_arc4 = NULL; @@ -100,9 +98,9 @@ static void *rtllib_tkip_init(int key_idx) fail: if (priv) { crypto_free_shash(priv->tx_tfm_michael); - crypto_free_skcipher(priv->tx_tfm_arc4); + crypto_free_sync_skcipher(priv->tx_tfm_arc4); crypto_free_shash(priv->rx_tfm_michael); - crypto_free_skcipher(priv->rx_tfm_arc4); + crypto_free_sync_skcipher(priv->rx_tfm_arc4); kfree(priv); } @@ -116,9 +114,9 @@ static void rtllib_tkip_deinit(void *priv) if (_priv) { crypto_free_shash(_priv->tx_tfm_michael); - crypto_free_skcipher(_priv->tx_tfm_arc4); + crypto_free_sync_skcipher(_priv->tx_tfm_arc4); crypto_free_shash(_priv->rx_tfm_michael); - crypto_free_skcipher(_priv->rx_tfm_arc4); + crypto_free_sync_skcipher(_priv->rx_tfm_arc4); } kfree(priv); } @@ -337,7 +335,7 @@ static int rtllib_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) *pos++ = (tkey->tx_iv32 >> 24) & 0xff; if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4); icv = skb_put(skb, 4); crc = ~crc32_le(~0, pos, len); @@ -349,8 +347,8 @@ static int rtllib_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) sg_init_one(&sg, pos, len+4); - crypto_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16); - skcipher_request_set_tfm(req, tkey->tx_tfm_arc4); + crypto_sync_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16); + skcipher_request_set_sync_tfm(req, tkey->tx_tfm_arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL); ret = crypto_skcipher_encrypt(req); @@ -420,7 +418,7 @@ static int 
rtllib_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) pos += 8; if (!tcb_desc->bHwSec || (skb->cb[0] == 1)) { - SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4); if ((iv32 < tkey->rx_iv32 || (iv32 == tkey->rx_iv32 && iv16 <= tkey->rx_iv16)) && @@ -447,8 +445,8 @@ static int rtllib_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) sg_init_one(&sg, pos, plen+4); - crypto_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); - skcipher_request_set_tfm(req, tkey->rx_tfm_arc4); + crypto_sync_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); + skcipher_request_set_sync_tfm(req, tkey->rx_tfm_arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL); err = crypto_skcipher_decrypt(req); @@ -664,9 +662,9 @@ static int rtllib_tkip_set_key(void *key, int len, u8 *seq, void *priv) struct rtllib_tkip_data *tkey = priv; int keyidx; struct crypto_shash *tfm = tkey->tx_tfm_michael; - struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4; + struct crypto_sync_skcipher *tfm2 = tkey->tx_tfm_arc4; struct crypto_shash *tfm3 = tkey->rx_tfm_michael; - struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4; + struct crypto_sync_skcipher *tfm4 = tkey->rx_tfm_arc4; keyidx = tkey->key_idx; memset(tkey, 0, sizeof(*tkey)); diff --git a/drivers/staging/rtl8192e/rtllib_crypt_wep.c b/drivers/staging/rtl8192e/rtllib_crypt_wep.c index b3343a5d0fd6..d11ec39171d5 100644 --- a/drivers/staging/rtl8192e/rtllib_crypt_wep.c +++ b/drivers/staging/rtl8192e/rtllib_crypt_wep.c @@ -27,8 +27,8 @@ struct prism2_wep_data { u8 key[WEP_KEY_LEN + 1]; u8 key_len; u8 key_idx; - struct crypto_skcipher *tx_tfm; - struct crypto_skcipher *rx_tfm; + struct crypto_sync_skcipher *tx_tfm; + struct crypto_sync_skcipher *rx_tfm; }; @@ -41,13 +41,13 @@ static void *prism2_wep_init(int keyidx) goto fail; priv->key_idx = keyidx; - priv->tx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + priv->tx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->tx_tfm)) { pr_debug("rtllib_crypt_wep: could not allocate crypto API arc4\n"); priv->tx_tfm = NULL; goto fail; } - priv->rx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + priv->rx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->rx_tfm)) { pr_debug("rtllib_crypt_wep: could not allocate crypto API arc4\n"); priv->rx_tfm = NULL; @@ -61,8 +61,8 @@ static void *prism2_wep_init(int keyidx) fail: if (priv) { - crypto_free_skcipher(priv->tx_tfm); - crypto_free_skcipher(priv->rx_tfm); + crypto_free_sync_skcipher(priv->tx_tfm); + crypto_free_sync_skcipher(priv->rx_tfm); kfree(priv); } return NULL; @@ -74,8 +74,8 @@ static void prism2_wep_deinit(void *priv) struct prism2_wep_data *_priv = priv; if (_priv) { - crypto_free_skcipher(_priv->tx_tfm); - crypto_free_skcipher(_priv->rx_tfm); + crypto_free_sync_skcipher(_priv->tx_tfm); + crypto_free_sync_skcipher(_priv->rx_tfm); } kfree(priv); } @@ -135,7 +135,7 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) memcpy(key + 3, wep->key, wep->key_len); if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm); /* Append little-endian CRC32 and encrypt it to produce ICV */ crc = ~crc32_le(~0, pos, len); @@ -146,8 +146,8 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[3] = crc >> 24; sg_init_one(&sg, pos, len+4); - crypto_skcipher_setkey(wep->tx_tfm, key, klen); - 
skcipher_request_set_tfm(req, wep->tx_tfm); + crypto_sync_skcipher_setkey(wep->tx_tfm, key, klen); + skcipher_request_set_sync_tfm(req, wep->tx_tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL); err = crypto_skcipher_encrypt(req); @@ -199,11 +199,11 @@ static int prism2_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv) plen = skb->len - hdr_len - 8; if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm); sg_init_one(&sg, pos, plen+4); - crypto_skcipher_setkey(wep->rx_tfm, key, klen); - skcipher_request_set_tfm(req, wep->rx_tfm); + crypto_sync_skcipher_setkey(wep->rx_tfm, key, klen); + skcipher_request_set_sync_tfm(req, wep->rx_tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL); err = crypto_skcipher_decrypt(req); diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c index 1088fa0aee0e..829fa4bd253c 100644 --- a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c +++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_tkip.c @@ -53,9 +53,9 @@ struct ieee80211_tkip_data { int key_idx; - struct crypto_skcipher *rx_tfm_arc4; + struct crypto_sync_skcipher *rx_tfm_arc4; struct crypto_shash *rx_tfm_michael; - struct crypto_skcipher *tx_tfm_arc4; + struct crypto_sync_skcipher *tx_tfm_arc4; struct crypto_shash *tx_tfm_michael; /* scratch buffers for virt_to_page() (crypto API) */ @@ -71,8 +71,7 @@ static void *ieee80211_tkip_init(int key_idx) goto fail; priv->key_idx = key_idx; - priv->tx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, - CRYPTO_ALG_ASYNC); + priv->tx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->tx_tfm_arc4)) { printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate " "crypto API arc4\n"); @@ -88,8 +87,7 @@ static void *ieee80211_tkip_init(int key_idx) goto fail; } - priv->rx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, - CRYPTO_ALG_ASYNC); + priv->rx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->rx_tfm_arc4)) { printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate " "crypto API arc4\n"); @@ -110,9 +108,9 @@ static void *ieee80211_tkip_init(int key_idx) fail: if (priv) { crypto_free_shash(priv->tx_tfm_michael); - crypto_free_skcipher(priv->tx_tfm_arc4); + crypto_free_sync_skcipher(priv->tx_tfm_arc4); crypto_free_shash(priv->rx_tfm_michael); - crypto_free_skcipher(priv->rx_tfm_arc4); + crypto_free_sync_skcipher(priv->rx_tfm_arc4); kfree(priv); } @@ -126,9 +124,9 @@ static void ieee80211_tkip_deinit(void *priv) if (_priv) { crypto_free_shash(_priv->tx_tfm_michael); - crypto_free_skcipher(_priv->tx_tfm_arc4); + crypto_free_sync_skcipher(_priv->tx_tfm_arc4); crypto_free_shash(_priv->rx_tfm_michael); - crypto_free_skcipher(_priv->rx_tfm_arc4); + crypto_free_sync_skcipher(_priv->rx_tfm_arc4); } kfree(priv); } @@ -340,7 +338,7 @@ static int ieee80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) *pos++ = (tkey->tx_iv32 >> 24) & 0xff; if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4); icv = skb_put(skb, 4); crc = ~crc32_le(~0, pos, len); @@ -348,9 +346,9 @@ static int ieee80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[1] = crc >> 8; icv[2] = crc >> 16; icv[3] = crc >> 24; - crypto_skcipher_setkey(tkey->tx_tfm_arc4, 
rc4key, 16); + crypto_sync_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16); sg_init_one(&sg, pos, len+4); - skcipher_request_set_tfm(req, tkey->tx_tfm_arc4); + skcipher_request_set_sync_tfm(req, tkey->tx_tfm_arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL); ret = crypto_skcipher_encrypt(req); @@ -418,7 +416,7 @@ static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) pos += 8; if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4); if (iv32 < tkey->rx_iv32 || (iv32 == tkey->rx_iv32 && iv16 <= tkey->rx_iv16)) { @@ -440,10 +438,10 @@ static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) plen = skb->len - hdr_len - 12; - crypto_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); + crypto_sync_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); sg_init_one(&sg, pos, plen+4); - skcipher_request_set_tfm(req, tkey->rx_tfm_arc4); + skcipher_request_set_sync_tfm(req, tkey->rx_tfm_arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL); @@ -663,9 +661,9 @@ static int ieee80211_tkip_set_key(void *key, int len, u8 *seq, void *priv) struct ieee80211_tkip_data *tkey = priv; int keyidx; struct crypto_shash *tfm = tkey->tx_tfm_michael; - struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4; + struct crypto_sync_skcipher *tfm2 = tkey->tx_tfm_arc4; struct crypto_shash *tfm3 = tkey->rx_tfm_michael; - struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4; + struct crypto_sync_skcipher *tfm4 = tkey->rx_tfm_arc4; keyidx = tkey->key_idx; memset(tkey, 0, sizeof(*tkey)); diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c index b9f86be9e52b..d4a1bf0caa7a 100644 --- a/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c +++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_crypt_wep.c @@ -32,8 +32,8 @@ struct prism2_wep_data { u8 key[WEP_KEY_LEN + 1]; u8 key_len; u8 key_idx; - struct crypto_skcipher *tx_tfm; - struct crypto_skcipher *rx_tfm; + struct crypto_sync_skcipher *tx_tfm; + struct crypto_sync_skcipher *rx_tfm; }; @@ -46,10 +46,10 @@ static void *prism2_wep_init(int keyidx) return NULL; priv->key_idx = keyidx; - priv->tx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + priv->tx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->tx_tfm)) goto free_priv; - priv->rx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + priv->rx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->rx_tfm)) goto free_tx; @@ -58,7 +58,7 @@ static void *prism2_wep_init(int keyidx) return priv; free_tx: - crypto_free_skcipher(priv->tx_tfm); + crypto_free_sync_skcipher(priv->tx_tfm); free_priv: kfree(priv); return NULL; @@ -70,8 +70,8 @@ static void prism2_wep_deinit(void *priv) struct prism2_wep_data *_priv = priv; if (_priv) { - crypto_free_skcipher(_priv->tx_tfm); - crypto_free_skcipher(_priv->rx_tfm); + crypto_free_sync_skcipher(_priv->tx_tfm); + crypto_free_sync_skcipher(_priv->rx_tfm); } kfree(priv); } @@ -128,7 +128,7 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) memcpy(key + 3, wep->key, wep->key_len); if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm); /* Append little-endian CRC32 and encrypt it to produce ICV */ crc = ~crc32_le(~0, pos, len); @@ 
-138,10 +138,10 @@ static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[2] = crc >> 16; icv[3] = crc >> 24; - crypto_skcipher_setkey(wep->tx_tfm, key, klen); + crypto_sync_skcipher_setkey(wep->tx_tfm, key, klen); sg_init_one(&sg, pos, len+4); - skcipher_request_set_tfm(req, wep->tx_tfm); + skcipher_request_set_sync_tfm(req, wep->tx_tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL); @@ -193,12 +193,12 @@ static int prism2_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv) plen = skb->len - hdr_len - 8; if (!tcb_desc->bHwSec) { - SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm); - crypto_skcipher_setkey(wep->rx_tfm, key, klen); + crypto_sync_skcipher_setkey(wep->rx_tfm, key, klen); sg_init_one(&sg, pos, plen+4); - skcipher_request_set_tfm(req, wep->rx_tfm); + skcipher_request_set_sync_tfm(req, wep->rx_tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL); diff --git a/net/wireless/lib80211_crypt_tkip.c b/net/wireless/lib80211_crypt_tkip.c index e6bce1f130c9..346e19cbdf59 100644 --- a/net/wireless/lib80211_crypt_tkip.c +++ b/net/wireless/lib80211_crypt_tkip.c @@ -64,9 +64,9 @@ struct lib80211_tkip_data { int key_idx; - struct crypto_skcipher *rx_tfm_arc4; + struct crypto_sync_skcipher *rx_tfm_arc4; struct crypto_shash *rx_tfm_michael; - struct crypto_skcipher *tx_tfm_arc4; + struct crypto_sync_skcipher *tx_tfm_arc4; struct crypto_shash *tx_tfm_michael; /* scratch buffers for virt_to_page() (crypto API) */ @@ -99,8 +99,7 @@ static void *lib80211_tkip_init(int key_idx) priv->key_idx = key_idx; - priv->tx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, - CRYPTO_ALG_ASYNC); + priv->tx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->tx_tfm_arc4)) { priv->tx_tfm_arc4 = NULL; goto fail; @@ -112,8 +111,7 @@ static void *lib80211_tkip_init(int key_idx) goto fail; } - priv->rx_tfm_arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, - CRYPTO_ALG_ASYNC); + priv->rx_tfm_arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->rx_tfm_arc4)) { priv->rx_tfm_arc4 = NULL; goto fail; @@ -130,9 +128,9 @@ static void *lib80211_tkip_init(int key_idx) fail: if (priv) { crypto_free_shash(priv->tx_tfm_michael); - crypto_free_skcipher(priv->tx_tfm_arc4); + crypto_free_sync_skcipher(priv->tx_tfm_arc4); crypto_free_shash(priv->rx_tfm_michael); - crypto_free_skcipher(priv->rx_tfm_arc4); + crypto_free_sync_skcipher(priv->rx_tfm_arc4); kfree(priv); } @@ -144,9 +142,9 @@ static void lib80211_tkip_deinit(void *priv) struct lib80211_tkip_data *_priv = priv; if (_priv) { crypto_free_shash(_priv->tx_tfm_michael); - crypto_free_skcipher(_priv->tx_tfm_arc4); + crypto_free_sync_skcipher(_priv->tx_tfm_arc4); crypto_free_shash(_priv->rx_tfm_michael); - crypto_free_skcipher(_priv->rx_tfm_arc4); + crypto_free_sync_skcipher(_priv->rx_tfm_arc4); } kfree(priv); } @@ -344,7 +342,7 @@ static int lib80211_tkip_hdr(struct sk_buff *skb, int hdr_len, static int lib80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) { struct lib80211_tkip_data *tkey = priv; - SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->tx_tfm_arc4); int len; u8 rc4key[16], *pos, *icv; u32 crc; @@ -374,9 +372,9 @@ static int lib80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[2] = crc >> 16; icv[3] = crc >> 24; - crypto_skcipher_setkey(tkey->tx_tfm_arc4, 
rc4key, 16); + crypto_sync_skcipher_setkey(tkey->tx_tfm_arc4, rc4key, 16); sg_init_one(&sg, pos, len + 4); - skcipher_request_set_tfm(req, tkey->tx_tfm_arc4); + skcipher_request_set_sync_tfm(req, tkey->tx_tfm_arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL); err = crypto_skcipher_encrypt(req); @@ -400,7 +398,7 @@ static inline int tkip_replay_check(u32 iv32_n, u16 iv16_n, static int lib80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) { struct lib80211_tkip_data *tkey = priv; - SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tkey->rx_tfm_arc4); u8 rc4key[16]; u8 keyidx, *pos; u32 iv32; @@ -463,9 +461,9 @@ static int lib80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv) plen = skb->len - hdr_len - 12; - crypto_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); + crypto_sync_skcipher_setkey(tkey->rx_tfm_arc4, rc4key, 16); sg_init_one(&sg, pos, plen + 4); - skcipher_request_set_tfm(req, tkey->rx_tfm_arc4); + skcipher_request_set_sync_tfm(req, tkey->rx_tfm_arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL); err = crypto_skcipher_decrypt(req); @@ -660,9 +658,9 @@ static int lib80211_tkip_set_key(void *key, int len, u8 * seq, void *priv) struct lib80211_tkip_data *tkey = priv; int keyidx; struct crypto_shash *tfm = tkey->tx_tfm_michael; - struct crypto_skcipher *tfm2 = tkey->tx_tfm_arc4; + struct crypto_sync_skcipher *tfm2 = tkey->tx_tfm_arc4; struct crypto_shash *tfm3 = tkey->rx_tfm_michael; - struct crypto_skcipher *tfm4 = tkey->rx_tfm_arc4; + struct crypto_sync_skcipher *tfm4 = tkey->rx_tfm_arc4; keyidx = tkey->key_idx; memset(tkey, 0, sizeof(*tkey)); diff --git a/net/wireless/lib80211_crypt_wep.c b/net/wireless/lib80211_crypt_wep.c index d05f58b0fd04..bdadee497f57 100644 --- a/net/wireless/lib80211_crypt_wep.c +++ b/net/wireless/lib80211_crypt_wep.c @@ -35,8 +35,8 @@ struct lib80211_wep_data { u8 key[WEP_KEY_LEN + 1]; u8 key_len; u8 key_idx; - struct crypto_skcipher *tx_tfm; - struct crypto_skcipher *rx_tfm; + struct crypto_sync_skcipher *tx_tfm; + struct crypto_sync_skcipher *rx_tfm; }; static void *lib80211_wep_init(int keyidx) @@ -48,13 +48,13 @@ static void *lib80211_wep_init(int keyidx) goto fail; priv->key_idx = keyidx; - priv->tx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + priv->tx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->tx_tfm)) { priv->tx_tfm = NULL; goto fail; } - priv->rx_tfm = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + priv->rx_tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(priv->rx_tfm)) { priv->rx_tfm = NULL; goto fail; @@ -66,8 +66,8 @@ static void *lib80211_wep_init(int keyidx) fail: if (priv) { - crypto_free_skcipher(priv->tx_tfm); - crypto_free_skcipher(priv->rx_tfm); + crypto_free_sync_skcipher(priv->tx_tfm); + crypto_free_sync_skcipher(priv->rx_tfm); kfree(priv); } return NULL; @@ -77,8 +77,8 @@ static void lib80211_wep_deinit(void *priv) { struct lib80211_wep_data *_priv = priv; if (_priv) { - crypto_free_skcipher(_priv->tx_tfm); - crypto_free_skcipher(_priv->rx_tfm); + crypto_free_sync_skcipher(_priv->tx_tfm); + crypto_free_sync_skcipher(_priv->rx_tfm); } kfree(priv); } @@ -129,7 +129,7 @@ static int lib80211_wep_build_iv(struct sk_buff *skb, int hdr_len, static int lib80211_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) { struct lib80211_wep_data *wep = priv; - 
SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->tx_tfm); u32 crc, klen, len; u8 *pos, *icv; struct scatterlist sg; @@ -162,9 +162,9 @@ static int lib80211_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) icv[2] = crc >> 16; icv[3] = crc >> 24; - crypto_skcipher_setkey(wep->tx_tfm, key, klen); + crypto_sync_skcipher_setkey(wep->tx_tfm, key, klen); sg_init_one(&sg, pos, len + 4); - skcipher_request_set_tfm(req, wep->tx_tfm); + skcipher_request_set_sync_tfm(req, wep->tx_tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, len + 4, NULL); err = crypto_skcipher_encrypt(req); @@ -182,7 +182,7 @@ static int lib80211_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv) static int lib80211_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv) { struct lib80211_wep_data *wep = priv; - SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, wep->rx_tfm); u32 crc, klen, plen; u8 key[WEP_KEY_LEN + 3]; u8 keyidx, *pos, icv[4]; @@ -208,9 +208,9 @@ static int lib80211_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv) /* Apply RC4 to data and compute CRC32 over decrypted data */ plen = skb->len - hdr_len - 8; - crypto_skcipher_setkey(wep->rx_tfm, key, klen); + crypto_sync_skcipher_setkey(wep->rx_tfm, key, klen); sg_init_one(&sg, pos, plen + 4); - skcipher_request_set_tfm(req, wep->rx_tfm); + skcipher_request_set_sync_tfm(req, wep->rx_tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, plen + 4, NULL); err = crypto_skcipher_decrypt(req); From patchwork Wed Sep 19 02:10:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605181 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CCDAD6CB for ; Wed, 19 Sep 2018 02:13:05 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BC8A528814 for ; Wed, 19 Sep 2018 02:13:05 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B12A828848; Wed, 19 Sep 2018 02:13:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5758E28814 for ; Wed, 19 Sep 2018 02:13:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730846AbeISHqh (ORCPT ); Wed, 19 Sep 2018 03:46:37 -0400 Received: from mail-pg1-f196.google.com ([209.85.215.196]:33472 "EHLO mail-pg1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730831AbeISHqg (ORCPT ); Wed, 19 Sep 2018 03:46:36 -0400 Received: by mail-pg1-f196.google.com with SMTP id s7-v6so1951259pgc.0 for ; Tue, 18 Sep 2018 19:11:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=jrF8nV3VTb4xUBtXcPPmVV0+A80GarPW4EVJzK2YYBE=; 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, Alexander Aring, Stefan Schmidt, linux-wpan@vger.kernel.org, Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List
Subject: [PATCH crypto-next 04/23] mac802154: Remove VLA usage of skcipher
Date: Tue, 18 Sep 2018 19:10:41 -0700
Message-Id: <20180919021100.3380-5-keescook@chromium.org>
In-Reply-To: <20180919021100.3380-1-keescook@chromium.org>
References: <20180919021100.3380-1-keescook@chromium.org>

In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size.
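For readers following the conversions, the end state each of these patches reaches looks roughly like the sketch below. This is illustrative code only, not part of the series: the algorithm name, buffer handling and error paths are placeholders, and it assumes the helpers introduced in patch 01/23 plus the usual <crypto/skcipher.h>, <linux/scatterlist.h> and <linux/err.h> headers.

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

/* Hypothetical helper: encrypt a linear buffer in place with a
 * synchronous-only skcipher and a fixed-size on-stack request.
 */
static int example_encrypt_in_place(const u8 *key, unsigned int keylen,
                                    u8 *buf, unsigned int len)
{
        struct crypto_sync_skcipher *tfm;
        struct scatterlist sg;
        int err;

        /* The sync allocator masks out async implementations and WARNs
         * if the request size could exceed MAX_SYNC_SKCIPHER_REQSIZE.
         */
        tfm = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_sync_skcipher_setkey(tfm, key, keylen);
        if (!err) {
                /* Fixed-size stack allocation, no VLA. */
                SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

                skcipher_request_set_sync_tfm(req, tfm);
                skcipher_request_set_callback(req, 0, NULL, NULL);
                sg_init_one(&sg, buf, len);
                skcipher_request_set_crypt(req, &sg, &sg, len, NULL);
                err = crypto_skcipher_encrypt(req);
                skcipher_request_zero(req);
        }

        crypto_free_sync_skcipher(tfm);
        return err;
}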
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Alexander Aring Cc: Stefan Schmidt Cc: linux-wpan@vger.kernel.org Signed-off-by: Kees Cook --- net/mac802154/llsec.c | 16 ++++++++-------- net/mac802154/llsec.h | 2 +- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/net/mac802154/llsec.c b/net/mac802154/llsec.c index 2fb703d70803..7e29f88dbf6a 100644 --- a/net/mac802154/llsec.c +++ b/net/mac802154/llsec.c @@ -146,18 +146,18 @@ llsec_key_alloc(const struct ieee802154_llsec_key *template) goto err_tfm; } - key->tfm0 = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC); + key->tfm0 = crypto_alloc_sync_skcipher("ctr(aes)", 0, 0); if (IS_ERR(key->tfm0)) goto err_tfm; - if (crypto_skcipher_setkey(key->tfm0, template->key, + if (crypto_sync_skcipher_setkey(key->tfm0, template->key, IEEE802154_LLSEC_KEY_SIZE)) goto err_tfm0; return key; err_tfm0: - crypto_free_skcipher(key->tfm0); + crypto_free_sync_skcipher(key->tfm0); err_tfm: for (i = 0; i < ARRAY_SIZE(key->tfm); i++) if (key->tfm[i]) @@ -177,7 +177,7 @@ static void llsec_key_release(struct kref *ref) for (i = 0; i < ARRAY_SIZE(key->tfm); i++) crypto_free_aead(key->tfm[i]); - crypto_free_skcipher(key->tfm0); + crypto_free_sync_skcipher(key->tfm0); kzfree(key); } @@ -622,7 +622,7 @@ llsec_do_encrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec, { u8 iv[16]; struct scatterlist src; - SKCIPHER_REQUEST_ON_STACK(req, key->tfm0); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, key->tfm0); int err, datalen; unsigned char *data; @@ -632,7 +632,7 @@ llsec_do_encrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec, datalen = skb_tail_pointer(skb) - data; sg_init_one(&src, data, datalen); - skcipher_request_set_tfm(req, key->tfm0); + skcipher_request_set_sync_tfm(req, key->tfm0); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &src, &src, datalen, iv); err = crypto_skcipher_encrypt(req); @@ -840,7 +840,7 @@ llsec_do_decrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec, unsigned char *data; int datalen; struct scatterlist src; - SKCIPHER_REQUEST_ON_STACK(req, key->tfm0); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, key->tfm0); int err; llsec_geniv(iv, dev_addr, &hdr->sec); @@ -849,7 +849,7 @@ llsec_do_decrypt_unauth(struct sk_buff *skb, const struct mac802154_llsec *sec, sg_init_one(&src, data, datalen); - skcipher_request_set_tfm(req, key->tfm0); + skcipher_request_set_sync_tfm(req, key->tfm0); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &src, &src, datalen, iv); diff --git a/net/mac802154/llsec.h b/net/mac802154/llsec.h index 6f3b658e3279..8be46d74dc39 100644 --- a/net/mac802154/llsec.h +++ b/net/mac802154/llsec.h @@ -29,7 +29,7 @@ struct mac802154_llsec_key { /* one tfm for each authsize (4/8/16) */ struct crypto_aead *tfm[3]; - struct crypto_skcipher *tfm0; + struct crypto_sync_skcipher *tfm0; struct kref ref; }; From patchwork Wed Sep 19 02:10:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605167 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 594E913AD for ; Wed, 19 Sep 2018 02:12:33 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, Martin Schwidefsky, Heiko Carstens, linux-s390@vger.kernel.org, Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List
Subject: [PATCH crypto-next 05/23] s390/crypto: Remove VLA usage of skcipher
Date: Tue, 18 Sep 2018 19:10:42 -0700
Message-Id: <20180919021100.3380-6-keescook@chromium.org>
In-Reply-To: <20180919021100.3380-1-keescook@chromium.org>
References: <20180919021100.3380-1-keescook@chromium.org>

In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size.
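The s390 (and later x86) fallback paths additionally forward tfm flags between the outer cipher and its synchronous software fallback at setkey time. A condensed sketch of that pattern, with placeholder names and the skcipher flag helpers (the s390 driver itself goes through the older tfm->crt_flags field), assuming the same headers as the earlier sketch:

/* Sketch only: propagate request flags from a parent skcipher to its
 * synchronous software fallback, then report result flags back.
 */
static int example_fallback_setkey(struct crypto_skcipher *parent,
                                   struct crypto_sync_skcipher *fallback,
                                   const u8 *key, unsigned int keylen)
{
        int err;

        crypto_sync_skcipher_clear_flags(fallback, CRYPTO_TFM_REQ_MASK);
        crypto_sync_skcipher_set_flags(fallback,
                                       crypto_skcipher_get_flags(parent) &
                                       CRYPTO_TFM_REQ_MASK);

        err = crypto_sync_skcipher_setkey(fallback, key, keylen);

        crypto_skcipher_set_flags(parent,
                                  crypto_sync_skcipher_get_flags(fallback) &
                                  CRYPTO_TFM_RES_MASK);
        return err;
}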
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Martin Schwidefsky Cc: Heiko Carstens Cc: linux-s390@vger.kernel.org Signed-off-by: Kees Cook --- arch/s390/crypto/aes_s390.c | 48 ++++++++++++++++++------------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index c54cb26eb7f5..812d9498d97b 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -44,7 +44,7 @@ struct s390_aes_ctx { int key_len; unsigned long fc; union { - struct crypto_skcipher *blk; + struct crypto_sync_skcipher *blk; struct crypto_cipher *cip; } fallback; }; @@ -54,7 +54,7 @@ struct s390_xts_ctx { u8 pcc_key[32]; int key_len; unsigned long fc; - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; }; struct gcm_sg_walk { @@ -184,14 +184,15 @@ static int setkey_fallback_blk(struct crypto_tfm *tfm, const u8 *key, struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm); unsigned int ret; - crypto_skcipher_clear_flags(sctx->fallback.blk, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(sctx->fallback.blk, tfm->crt_flags & + crypto_sync_skcipher_clear_flags(sctx->fallback.blk, + CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(sctx->fallback.blk, tfm->crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_skcipher_setkey(sctx->fallback.blk, key, len); + ret = crypto_sync_skcipher_setkey(sctx->fallback.blk, key, len); tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK; - tfm->crt_flags |= crypto_skcipher_get_flags(sctx->fallback.blk) & + tfm->crt_flags |= crypto_sync_skcipher_get_flags(sctx->fallback.blk) & CRYPTO_TFM_RES_MASK; return ret; @@ -204,9 +205,9 @@ static int fallback_blk_dec(struct blkcipher_desc *desc, unsigned int ret; struct crypto_blkcipher *tfm = desc->tfm; struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm); - SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk); - skcipher_request_set_tfm(req, sctx->fallback.blk); + skcipher_request_set_sync_tfm(req, sctx->fallback.blk); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); @@ -223,9 +224,9 @@ static int fallback_blk_enc(struct blkcipher_desc *desc, unsigned int ret; struct crypto_blkcipher *tfm = desc->tfm; struct s390_aes_ctx *sctx = crypto_blkcipher_ctx(tfm); - SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, sctx->fallback.blk); - skcipher_request_set_tfm(req, sctx->fallback.blk); + skcipher_request_set_sync_tfm(req, sctx->fallback.blk); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); @@ -306,8 +307,7 @@ static int fallback_init_blk(struct crypto_tfm *tfm) const char *name = tfm->__crt_alg->cra_name; struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm); - sctx->fallback.blk = crypto_alloc_skcipher(name, 0, - CRYPTO_ALG_ASYNC | + sctx->fallback.blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(sctx->fallback.blk)) { @@ -323,7 +323,7 @@ static void fallback_exit_blk(struct crypto_tfm *tfm) { struct s390_aes_ctx *sctx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(sctx->fallback.blk); + crypto_free_sync_skcipher(sctx->fallback.blk); } static struct crypto_alg ecb_aes_alg = { @@ -453,14 +453,15 @@ static int xts_fallback_setkey(struct crypto_tfm *tfm, const u8 *key, struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm); unsigned int ret; - 
crypto_skcipher_clear_flags(xts_ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(xts_ctx->fallback, tfm->crt_flags & + crypto_sync_skcipher_clear_flags(xts_ctx->fallback, + CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(xts_ctx->fallback, tfm->crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_skcipher_setkey(xts_ctx->fallback, key, len); + ret = crypto_sync_skcipher_setkey(xts_ctx->fallback, key, len); tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK; - tfm->crt_flags |= crypto_skcipher_get_flags(xts_ctx->fallback) & + tfm->crt_flags |= crypto_sync_skcipher_get_flags(xts_ctx->fallback) & CRYPTO_TFM_RES_MASK; return ret; @@ -472,10 +473,10 @@ static int xts_fallback_decrypt(struct blkcipher_desc *desc, { struct crypto_blkcipher *tfm = desc->tfm; struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm); - SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback); unsigned int ret; - skcipher_request_set_tfm(req, xts_ctx->fallback); + skcipher_request_set_sync_tfm(req, xts_ctx->fallback); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); @@ -491,10 +492,10 @@ static int xts_fallback_encrypt(struct blkcipher_desc *desc, { struct crypto_blkcipher *tfm = desc->tfm; struct s390_xts_ctx *xts_ctx = crypto_blkcipher_ctx(tfm); - SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, xts_ctx->fallback); unsigned int ret; - skcipher_request_set_tfm(req, xts_ctx->fallback); + skcipher_request_set_sync_tfm(req, xts_ctx->fallback); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); @@ -611,8 +612,7 @@ static int xts_fallback_init(struct crypto_tfm *tfm) const char *name = tfm->__crt_alg->cra_name; struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm); - xts_ctx->fallback = crypto_alloc_skcipher(name, 0, - CRYPTO_ALG_ASYNC | + xts_ctx->fallback = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(xts_ctx->fallback)) { @@ -627,7 +627,7 @@ static void xts_fallback_exit(struct crypto_tfm *tfm) { struct s390_xts_ctx *xts_ctx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(xts_ctx->fallback); + crypto_free_sync_skcipher(xts_ctx->fallback); } static struct crypto_alg xts_aes_alg = { From patchwork Wed Sep 19 02:10:43 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605177 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4331E13AD for ; Wed, 19 Sep 2018 02:12:56 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 31D8F2BF25 for ; Wed, 19 Sep 2018 02:12:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 25ECD2BFC2; Wed, 19 Sep 2018 02:12:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B627E2BF25 for ; Wed, 19 Sep 2018 02:12:55 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, x86@kernel.org, Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List
Subject: [PATCH crypto-next 06/23] x86/fpu: Remove VLA usage of skcipher
Date: Tue, 18 Sep 2018 19:10:43 -0700
Message-Id: <20180919021100.3380-7-keescook@chromium.org>
In-Reply-To: <20180919021100.3380-1-keescook@chromium.org>
References: <20180919021100.3380-1-keescook@chromium.org>

In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size.
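The template pattern in this patch, where the outer skcipher hands the real work to a synchronous child via a fixed-size on-stack subrequest, reduces to roughly the following sketch (placeholder names, same header assumptions as above):

static int example_forward_encrypt(struct skcipher_request *req,
                                   struct crypto_sync_skcipher *child)
{
        SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);

        skcipher_request_set_sync_tfm(subreq, child);
        /* No completion callback is needed: the child is known to be
         * synchronous, so crypto_skcipher_encrypt() returns the result
         * directly.
         */
        skcipher_request_set_callback(subreq, 0, NULL, NULL);
        skcipher_request_set_crypt(subreq, req->src, req->dst,
                                   req->cryptlen, req->iv);
        return crypto_skcipher_encrypt(subreq);
}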
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: x86@kernel.org Signed-off-by: Kees Cook Reviewed-by: Ard Biesheuvel --- arch/x86/crypto/fpu.c | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/arch/x86/crypto/fpu.c b/arch/x86/crypto/fpu.c index 406680476c52..be9b3766f241 100644 --- a/arch/x86/crypto/fpu.c +++ b/arch/x86/crypto/fpu.c @@ -20,21 +20,23 @@ #include struct crypto_fpu_ctx { - struct crypto_skcipher *child; + struct crypto_sync_skcipher *child; }; static int crypto_fpu_setkey(struct crypto_skcipher *parent, const u8 *key, unsigned int keylen) { struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(parent); - struct crypto_skcipher *child = ctx->child; + struct crypto_sync_skcipher *child = ctx->child; int err; - crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) & + crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(child, + crypto_skcipher_get_flags(parent) & CRYPTO_TFM_REQ_MASK); - err = crypto_skcipher_setkey(child, key, keylen); - crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) & + err = crypto_sync_skcipher_setkey(child, key, keylen); + crypto_skcipher_set_flags(parent, + crypto_sync_skcipher_get_flags(child) & CRYPTO_TFM_RES_MASK); return err; } @@ -43,11 +45,11 @@ static int crypto_fpu_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm); - struct crypto_skcipher *child = ctx->child; - SKCIPHER_REQUEST_ON_STACK(subreq, child); + struct crypto_sync_skcipher *child = ctx->child; + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child); int err; - skcipher_request_set_tfm(subreq, child); + skcipher_request_set_sync_tfm(subreq, child); skcipher_request_set_callback(subreq, 0, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv); @@ -64,11 +66,11 @@ static int crypto_fpu_decrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm); - struct crypto_skcipher *child = ctx->child; - SKCIPHER_REQUEST_ON_STACK(subreq, child); + struct crypto_sync_skcipher *child = ctx->child; + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child); int err; - skcipher_request_set_tfm(subreq, child); + skcipher_request_set_sync_tfm(subreq, child); skcipher_request_set_callback(subreq, 0, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, req->iv); @@ -93,7 +95,7 @@ static int crypto_fpu_init_tfm(struct crypto_skcipher *tfm) if (IS_ERR(cipher)) return PTR_ERR(cipher); - ctx->child = cipher; + ctx->child = (struct crypto_sync_skcipher *)cipher; return 0; } @@ -102,7 +104,7 @@ static void crypto_fpu_exit_tfm(struct crypto_skcipher *tfm) { struct crypto_fpu_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_skcipher(ctx->child); + crypto_free_sync_skcipher(ctx->child); } static void crypto_fpu_free(struct skcipher_instance *inst) From patchwork Wed Sep 19 02:10:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605169 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id c1-v6sm32084957pfg.25.2018.09.18.19.11.08 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:10 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Jens Axboe , linux-block@vger.kernel.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 07/23] block: cryptoloop: Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:44 -0700 Message-Id: <20180919021100.3380-8-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Jens Axboe Cc: linux-block@vger.kernel.org Signed-off-by: Kees Cook Acked-by: Ard Biesheuvel --- drivers/block/cryptoloop.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c index 7033a4beda66..254ee7d54e91 100644 --- a/drivers/block/cryptoloop.c +++ b/drivers/block/cryptoloop.c @@ -45,7 +45,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info) char cms[LO_NAME_SIZE]; /* cipher-mode string */ char *mode; char *cmsp = cms; /* c-m string pointer */ - struct crypto_skcipher *tfm; + struct crypto_sync_skcipher *tfm; /* encryption breaks for non sector aligned offsets */ @@ -80,13 +80,13 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info) *cmsp++ = ')'; *cmsp = 0; - tfm = crypto_alloc_skcipher(cms, 0, CRYPTO_ALG_ASYNC); + tfm = crypto_alloc_sync_skcipher(cms, 0, 0); if (IS_ERR(tfm)) return PTR_ERR(tfm); - err = crypto_skcipher_setkey(tfm, info->lo_encrypt_key, - info->lo_encrypt_key_size); - + err = crypto_sync_skcipher_setkey(tfm, info->lo_encrypt_key, + info->lo_encrypt_key_size); + if (err != 0) goto out_free_tfm; @@ -94,7 +94,7 @@ cryptoloop_init(struct loop_device *lo, const struct loop_info64 *info) return 0; out_free_tfm: - crypto_free_skcipher(tfm); + crypto_free_sync_skcipher(tfm); out: return err; @@ -109,8 +109,8 @@ cryptoloop_transfer(struct loop_device *lo, int cmd, struct page *loop_page, unsigned loop_off, int size, sector_t IV) { - struct crypto_skcipher *tfm = lo->key_data; - SKCIPHER_REQUEST_ON_STACK(req, tfm); + struct crypto_sync_skcipher *tfm = lo->key_data; + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm); struct scatterlist sg_out; struct scatterlist sg_in; @@ -119,7 +119,7 @@ cryptoloop_transfer(struct loop_device *lo, int cmd, unsigned in_offs, out_offs; int err; - skcipher_request_set_tfm(req, tfm); + skcipher_request_set_sync_tfm(req, tfm); skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL); @@ -175,9 +175,9 @@ cryptoloop_ioctl(struct loop_device *lo, int cmd, unsigned long arg) static int cryptoloop_release(struct loop_device *lo) { - struct crypto_skcipher *tfm = lo->key_data; + struct crypto_sync_skcipher *tfm = lo->key_data; if (tfm != NULL) { - crypto_free_skcipher(tfm); + crypto_free_sync_skcipher(tfm); lo->key_data = NULL; return 0; } From 
patchwork Wed Sep 19 02:10:45 2018
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id h132-v6sm27112806pfc.100.2018.09.18.19.11.08 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:10 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ilya Dryomov , "Yan, Zheng" , Sage Weil , ceph-devel@vger.kernel.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 08/23] libceph: Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:45 -0700 Message-Id: <20180919021100.3380-9-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Ilya Dryomov Cc: "Yan, Zheng" Cc: Sage Weil Cc: ceph-devel@vger.kernel.org Signed-off-by: Kees Cook --- net/ceph/crypto.c | 12 ++++++------ net/ceph/crypto.h | 2 +- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c index 02172c408ff2..5d6724cee38f 100644 --- a/net/ceph/crypto.c +++ b/net/ceph/crypto.c @@ -46,9 +46,9 @@ static int set_secret(struct ceph_crypto_key *key, void *buf) goto fail; } - /* crypto_alloc_skcipher() allocates with GFP_KERNEL */ + /* crypto_alloc_sync_skcipher() allocates with GFP_KERNEL */ noio_flag = memalloc_noio_save(); - key->tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC); + key->tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0); memalloc_noio_restore(noio_flag); if (IS_ERR(key->tfm)) { ret = PTR_ERR(key->tfm); @@ -56,7 +56,7 @@ static int set_secret(struct ceph_crypto_key *key, void *buf) goto fail; } - ret = crypto_skcipher_setkey(key->tfm, key->key, key->len); + ret = crypto_sync_skcipher_setkey(key->tfm, key->key, key->len); if (ret) goto fail; @@ -136,7 +136,7 @@ void ceph_crypto_key_destroy(struct ceph_crypto_key *key) if (key) { kfree(key->key); key->key = NULL; - crypto_free_skcipher(key->tfm); + crypto_free_sync_skcipher(key->tfm); key->tfm = NULL; } } @@ -216,7 +216,7 @@ static void teardown_sgtable(struct sg_table *sgt) static int ceph_aes_crypt(const struct ceph_crypto_key *key, bool encrypt, void *buf, int buf_len, int in_len, int *pout_len) { - SKCIPHER_REQUEST_ON_STACK(req, key->tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, key->tfm); struct sg_table sgt; struct scatterlist prealloc_sg; char iv[AES_BLOCK_SIZE] __aligned(8); @@ -232,7 +232,7 @@ static int ceph_aes_crypt(const struct ceph_crypto_key *key, bool encrypt, return ret; memcpy(iv, aes_iv, AES_BLOCK_SIZE); - skcipher_request_set_tfm(req, key->tfm); + skcipher_request_set_sync_tfm(req, key->tfm); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sgt.sgl, sgt.sgl, crypt_len, iv); diff --git a/net/ceph/crypto.h b/net/ceph/crypto.h index bb45c7d43739..96ef4d860bc9 100644 --- a/net/ceph/crypto.h +++ b/net/ceph/crypto.h @@ -13,7 +13,7 @@ struct ceph_crypto_key { struct ceph_timespec created; int len; void *key; - struct crypto_skcipher *tfm; + struct crypto_sync_skcipher *tfm; }; int ceph_crypto_key_clone(struct 
ceph_crypto_key *dst,
From patchwork Wed Sep 19 02:10:46 2018
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id o3-v6sm21387459pgv.53.2018.09.18.19.11.08 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:10 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Paul Mackerras , linux-ppp@vger.kernel.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 09/23] ppp: mppe: Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:46 -0700 Message-Id: <20180919021100.3380-10-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Paul Mackerras Cc: linux-ppp@vger.kernel.org Signed-off-by: Kees Cook --- drivers/net/ppp/ppp_mppe.c | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/drivers/net/ppp/ppp_mppe.c b/drivers/net/ppp/ppp_mppe.c index a205750b431b..7ccdc62c6052 100644 --- a/drivers/net/ppp/ppp_mppe.c +++ b/drivers/net/ppp/ppp_mppe.c @@ -95,7 +95,7 @@ static inline void sha_pad_init(struct sha_pad *shapad) * State for an MPPE (de)compressor. */ struct ppp_mppe_state { - struct crypto_skcipher *arc4; + struct crypto_sync_skcipher *arc4; struct shash_desc *sha1; unsigned char *sha1_digest; unsigned char master_key[MPPE_MAX_KEY_LEN]; @@ -155,15 +155,15 @@ static void get_new_key_from_sha(struct ppp_mppe_state * state) static void mppe_rekey(struct ppp_mppe_state * state, int initial_key) { struct scatterlist sg_in[1], sg_out[1]; - SKCIPHER_REQUEST_ON_STACK(req, state->arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, state->arc4); - skcipher_request_set_tfm(req, state->arc4); + skcipher_request_set_sync_tfm(req, state->arc4); skcipher_request_set_callback(req, 0, NULL, NULL); get_new_key_from_sha(state); if (!initial_key) { - crypto_skcipher_setkey(state->arc4, state->sha1_digest, - state->keylen); + crypto_sync_skcipher_setkey(state->arc4, state->sha1_digest, + state->keylen); sg_init_table(sg_in, 1); sg_init_table(sg_out, 1); setup_sg(sg_in, state->sha1_digest, state->keylen); @@ -181,7 +181,8 @@ static void mppe_rekey(struct ppp_mppe_state * state, int initial_key) state->session_key[1] = 0x26; state->session_key[2] = 0x9e; } - crypto_skcipher_setkey(state->arc4, state->session_key, state->keylen); + crypto_sync_skcipher_setkey(state->arc4, state->session_key, + state->keylen); skcipher_request_zero(req); } @@ -203,7 +204,7 @@ static void *mppe_alloc(unsigned char *options, int optlen) goto out; - state->arc4 = crypto_alloc_skcipher("ecb(arc4)", 0, CRYPTO_ALG_ASYNC); + state->arc4 = crypto_alloc_sync_skcipher("ecb(arc4)", 0, 0); if (IS_ERR(state->arc4)) { state->arc4 = NULL; goto out_free; @@ -250,7 +251,7 @@ static void *mppe_alloc(unsigned char *options, int optlen) crypto_free_shash(state->sha1->tfm); kzfree(state->sha1); } - crypto_free_skcipher(state->arc4); + crypto_free_sync_skcipher(state->arc4); kfree(state); out: return NULL; @@ -266,7 +267,7 @@ static void mppe_free(void *arg) kfree(state->sha1_digest); 
crypto_free_shash(state->sha1->tfm); kzfree(state->sha1); - crypto_free_skcipher(state->arc4); + crypto_free_sync_skcipher(state->arc4); kfree(state); } } @@ -366,7 +367,7 @@ mppe_compress(void *arg, unsigned char *ibuf, unsigned char *obuf, int isize, int osize) { struct ppp_mppe_state *state = (struct ppp_mppe_state *) arg; - SKCIPHER_REQUEST_ON_STACK(req, state->arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, state->arc4); int proto; int err; struct scatterlist sg_in[1], sg_out[1]; @@ -426,7 +427,7 @@ mppe_compress(void *arg, unsigned char *ibuf, unsigned char *obuf, setup_sg(sg_in, ibuf, isize); setup_sg(sg_out, obuf, osize); - skcipher_request_set_tfm(req, state->arc4); + skcipher_request_set_sync_tfm(req, state->arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg_in, sg_out, isize, NULL); err = crypto_skcipher_encrypt(req); @@ -480,7 +481,7 @@ mppe_decompress(void *arg, unsigned char *ibuf, int isize, unsigned char *obuf, int osize) { struct ppp_mppe_state *state = (struct ppp_mppe_state *) arg; - SKCIPHER_REQUEST_ON_STACK(req, state->arc4); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, state->arc4); unsigned ccount; int flushed = MPPE_BITS(ibuf) & MPPE_BIT_FLUSHED; struct scatterlist sg_in[1], sg_out[1]; @@ -615,7 +616,7 @@ mppe_decompress(void *arg, unsigned char *ibuf, int isize, unsigned char *obuf, setup_sg(sg_in, ibuf, 1); setup_sg(sg_out, obuf, 1); - skcipher_request_set_tfm(req, state->arc4); + skcipher_request_set_sync_tfm(req, state->arc4); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg_in, sg_out, 1, NULL); if (crypto_skcipher_decrypt(req)) { From patchwork Wed Sep 19 02:10:47 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605159 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0C30117E0 for ; Wed, 19 Sep 2018 02:12:14 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id ED7EA2C00F for ; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EB81E2C080; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5689B2C0D5 for ; Wed, 19 Sep 2018 02:12:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730834AbeISHrh (ORCPT ); Wed, 19 Sep 2018 03:47:37 -0400 Received: from mail-pl1-f194.google.com ([209.85.214.194]:35267 "EHLO mail-pl1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730952AbeISHqn (ORCPT ); Wed, 19 Sep 2018 03:46:43 -0400 Received: by mail-pl1-f194.google.com with SMTP id g2-v6so1871649plo.2 for ; Tue, 18 Sep 2018 19:11:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=0npKhypEj5VJ9eArFb+OgsloY11mz5XIZSdSjPh1TKU=; 
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, David Howells, linux-afs@lists.infradead.org, Ard Biesheuvel, Eric Biggers, linux-crypto, Linux Kernel Mailing List
Subject: [PATCH crypto-next 10/23] rxrpc: Remove VLA usage of skcipher
Date: Tue, 18 Sep 2018 19:10:47 -0700
Message-Id: <20180919021100.3380-11-keescook@chromium.org>
In-Reply-To: <20180919021100.3380-1-keescook@chromium.org>
References: <20180919021100.3380-1-keescook@chromium.org>

In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size.
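One rxkad-specific detail: the RESPONSE decryption path re-keys a single preallocated cipher under a mutex instead of allocating one per call. Reduced to a sketch with placeholder names (the real code builds its scatterlist from the response structure and uses the connection's session key), assuming the same headers as above plus <linux/mutex.h>:

static struct crypto_sync_skcipher *example_shared_ci;  /* set up at init */
static DEFINE_MUTEX(example_ci_mutex);

static int example_decrypt_shared(const u8 *session_key, unsigned int keylen,
                                  u8 *iv, u8 *buf, unsigned int len)
{
        struct scatterlist sg;
        int err;

        mutex_lock(&example_ci_mutex);
        err = crypto_sync_skcipher_setkey(example_shared_ci, session_key,
                                          keylen);
        if (!err) {
                SYNC_SKCIPHER_REQUEST_ON_STACK(req, example_shared_ci);

                sg_init_one(&sg, buf, len);
                skcipher_request_set_sync_tfm(req, example_shared_ci);
                skcipher_request_set_callback(req, 0, NULL, NULL);
                skcipher_request_set_crypt(req, &sg, &sg, len, iv);
                err = crypto_skcipher_decrypt(req);
                skcipher_request_zero(req);
        }
        mutex_unlock(&example_ci_mutex);
        return err;
}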
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: David Howells Cc: linux-afs@lists.infradead.org Signed-off-by: Kees Cook --- net/rxrpc/ar-internal.h | 2 +- net/rxrpc/rxkad.c | 44 ++++++++++++++++++++--------------------- 2 files changed, 23 insertions(+), 23 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index c97558710421..41be33c9eecf 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -442,7 +442,7 @@ struct rxrpc_connection { struct sk_buff_head rx_queue; /* received conn-level packets */ const struct rxrpc_security *security; /* applied security module */ struct key *server_key; /* security for this service */ - struct crypto_skcipher *cipher; /* encryption handle */ + struct crypto_sync_skcipher *cipher; /* encryption handle */ struct rxrpc_crypt csum_iv; /* packet checksum base */ unsigned long flags; unsigned long events; diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index cea16838d588..cbef9ea43dec 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -46,7 +46,7 @@ struct rxkad_level2_hdr { * alloc routine, but since we have it to hand, we use it to decrypt RESPONSE * packets */ -static struct crypto_skcipher *rxkad_ci; +static struct crypto_sync_skcipher *rxkad_ci; static DEFINE_MUTEX(rxkad_ci_mutex); /* @@ -54,7 +54,7 @@ static DEFINE_MUTEX(rxkad_ci_mutex); */ static int rxkad_init_connection_security(struct rxrpc_connection *conn) { - struct crypto_skcipher *ci; + struct crypto_sync_skcipher *ci; struct rxrpc_key_token *token; int ret; @@ -63,14 +63,14 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn) token = conn->params.key->payload.data[0]; conn->security_ix = token->security_index; - ci = crypto_alloc_skcipher("pcbc(fcrypt)", 0, CRYPTO_ALG_ASYNC); + ci = crypto_alloc_sync_skcipher("pcbc(fcrypt)", 0, 0); if (IS_ERR(ci)) { _debug("no cipher"); ret = PTR_ERR(ci); goto error; } - if (crypto_skcipher_setkey(ci, token->kad->session_key, + if (crypto_sync_skcipher_setkey(ci, token->kad->session_key, sizeof(token->kad->session_key)) < 0) BUG(); @@ -104,7 +104,7 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn) static int rxkad_prime_packet_security(struct rxrpc_connection *conn) { struct rxrpc_key_token *token; - SKCIPHER_REQUEST_ON_STACK(req, conn->cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, conn->cipher); struct scatterlist sg; struct rxrpc_crypt iv; __be32 *tmpbuf; @@ -128,7 +128,7 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn) tmpbuf[3] = htonl(conn->security_ix); sg_init_one(&sg, tmpbuf, tmpsize); - skcipher_request_set_tfm(req, conn->cipher); + skcipher_request_set_sync_tfm(req, conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, tmpsize, iv.x); crypto_skcipher_encrypt(req); @@ -167,7 +167,7 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call, memset(&iv, 0, sizeof(iv)); sg_init_one(&sg, sechdr, 8); - skcipher_request_set_tfm(req, call->conn->cipher); + skcipher_request_set_sync_tfm(req, call->conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); crypto_skcipher_encrypt(req); @@ -212,7 +212,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call, memcpy(&iv, token->kad->session_key, sizeof(iv)); sg_init_one(&sg[0], sechdr, sizeof(rxkhdr)); - skcipher_request_set_tfm(req, call->conn->cipher); + skcipher_request_set_sync_tfm(req, 
call->conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg[0], &sg[0], sizeof(rxkhdr), iv.x); crypto_skcipher_encrypt(req); @@ -250,7 +250,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call, void *sechdr) { struct rxrpc_skb_priv *sp; - SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher); struct rxrpc_crypt iv; struct scatterlist sg; u32 x, y; @@ -279,7 +279,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call, call->crypto_buf[1] = htonl(x); sg_init_one(&sg, call->crypto_buf, 8); - skcipher_request_set_tfm(req, call->conn->cipher); + skcipher_request_set_sync_tfm(req, call->conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); crypto_skcipher_encrypt(req); @@ -352,7 +352,7 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb, /* start the decryption afresh */ memset(&iv, 0, sizeof(iv)); - skcipher_request_set_tfm(req, call->conn->cipher); + skcipher_request_set_sync_tfm(req, call->conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, 8, iv.x); crypto_skcipher_decrypt(req); @@ -450,7 +450,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb, token = call->conn->params.key->payload.data[0]; memcpy(&iv, token->kad->session_key, sizeof(iv)); - skcipher_request_set_tfm(req, call->conn->cipher); + skcipher_request_set_sync_tfm(req, call->conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, len, iv.x); crypto_skcipher_decrypt(req); @@ -506,7 +506,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb, unsigned int offset, unsigned int len, rxrpc_seq_t seq, u16 expected_cksum) { - SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher); struct rxrpc_crypt iv; struct scatterlist sg; bool aborted; @@ -529,7 +529,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb, call->crypto_buf[1] = htonl(x); sg_init_one(&sg, call->crypto_buf, 8); - skcipher_request_set_tfm(req, call->conn->cipher); + skcipher_request_set_sync_tfm(req, call->conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); crypto_skcipher_encrypt(req); @@ -755,7 +755,7 @@ static void rxkad_encrypt_response(struct rxrpc_connection *conn, struct rxkad_response *resp, const struct rxkad_key *s2) { - SKCIPHER_REQUEST_ON_STACK(req, conn->cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, conn->cipher); struct rxrpc_crypt iv; struct scatterlist sg[1]; @@ -764,7 +764,7 @@ static void rxkad_encrypt_response(struct rxrpc_connection *conn, sg_init_table(sg, 1); sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted)); - skcipher_request_set_tfm(req, conn->cipher); + skcipher_request_set_sync_tfm(req, conn->cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x); crypto_skcipher_encrypt(req); @@ -1021,7 +1021,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn, struct rxkad_response *resp, const struct rxrpc_crypt *session_key) { - SKCIPHER_REQUEST_ON_STACK(req, rxkad_ci); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, rxkad_ci); struct scatterlist sg[1]; struct rxrpc_crypt iv; @@ -1031,7 +1031,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection 
*conn, ASSERT(rxkad_ci != NULL); mutex_lock(&rxkad_ci_mutex); - if (crypto_skcipher_setkey(rxkad_ci, session_key->x, + if (crypto_sync_skcipher_setkey(rxkad_ci, session_key->x, sizeof(*session_key)) < 0) BUG(); @@ -1039,7 +1039,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn, sg_init_table(sg, 1); sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted)); - skcipher_request_set_tfm(req, rxkad_ci); + skcipher_request_set_sync_tfm(req, rxkad_ci); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x); crypto_skcipher_decrypt(req); @@ -1218,7 +1218,7 @@ static void rxkad_clear(struct rxrpc_connection *conn) _enter(""); if (conn->cipher) - crypto_free_skcipher(conn->cipher); + crypto_free_sync_skcipher(conn->cipher); } /* @@ -1228,7 +1228,7 @@ static int rxkad_init(void) { /* pin the cipher we need so that the crypto layer doesn't invoke * keventd to go get it */ - rxkad_ci = crypto_alloc_skcipher("pcbc(fcrypt)", 0, CRYPTO_ALG_ASYNC); + rxkad_ci = crypto_alloc_sync_skcipher("pcbc(fcrypt)", 0, 0); return PTR_ERR_OR_ZERO(rxkad_ci); } @@ -1238,7 +1238,7 @@ static int rxkad_init(void) static void rxkad_exit(void) { if (rxkad_ci) - crypto_free_skcipher(rxkad_ci); + crypto_free_sync_skcipher(rxkad_ci); } /* From patchwork Wed Sep 19 02:10:48 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605153 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 888DD13AD for ; Wed, 19 Sep 2018 02:12:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 79BAB2C0AC for ; Wed, 19 Sep 2018 02:12:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 776F12C0F0; Wed, 19 Sep 2018 02:12:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 656912C09B for ; Wed, 19 Sep 2018 02:12:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730683AbeISHr0 (ORCPT ); Wed, 19 Sep 2018 03:47:26 -0400 Received: from mail-pg1-f195.google.com ([209.85.215.195]:39220 "EHLO mail-pg1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730969AbeISHqq (ORCPT ); Wed, 19 Sep 2018 03:46:46 -0400 Received: by mail-pg1-f195.google.com with SMTP id i190-v6so1939895pgc.6 for ; Tue, 18 Sep 2018 19:11:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=8oNl/e9uTp41NIGJYGi79JHHDmF9+QFZKwzGixSKESY=; b=CunrNz7OuS/C/Vor9xnjWZnofRwOJg3CbCmk4yez1Yjn5gXxf2JQulpUX4y8NV3+en YgTa8mclnxwt5btJdIqlrKkMsk8aa8WlKOhw2USQFp51+oyU4hpospC3J1cyRI/CT9YR YHb1YZhJCeaE7VB5dVJmnjPpXzcK5GVfnMV2A= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; 
bh=8oNl/e9uTp41NIGJYGi79JHHDmF9+QFZKwzGixSKESY=; b=tpHbJHC9/aLPGJh17D7nrnnwlv6Tk+N1sUjQHBZuGvRUn1TxUkzOBXHF6UIjZ5KpW3 3nM4ZJgmbGxNDSFzehhjfBbjwcdbRJ32itSlrjOWJt+H5kxTM5CEwnUl5clmL9ViPdcu mQ2WVbNoB25tFhbUHrsIZ/1vocqBJoBaFtAXkPatPU8MoJI/SBSm09UAyPFBFJX9B0rO 5eZ2rsc47dzvnpT8VJj/tZmHxIRnIqIQ42HN5W0iqBFdkR0+4z4H012f4Ozsfr0LBvqn aqgYdHKr/2dmFjqYuNY7wmM8KAhHy8U9PyINVEddBH7QeXdc4TloGorUQC6N+Yl03U8P TNbg== X-Gm-Message-State: APzg51Bnm13+xIEhQGzE33a1g4Fy/oErjMa8DwMZTTX5TsOcCJLHpSsr 4z3i/zfhZpXFjAxQ1InMFpzH1A== X-Google-Smtp-Source: ANB0VdYMWFZlvu8PfeEo6xNjTWXrtKIaUIAVkTDS9Rd3bnxaSEgfjvUm6bgkan8W6tAJ+KyD1ohUQg== X-Received: by 2002:a62:1d54:: with SMTP id d81-v6mr33652235pfd.139.1537323078902; Tue, 18 Sep 2018 19:11:18 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id y7-v6sm25063370pge.8.2018.09.18.19.11.10 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:17 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Greg Kroah-Hartman , Felipe Balbi , Johan Hovold , "Gustavo A. R. Silva" , linux-usb@vger.kernel.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 11/23] wusb: Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:48 -0700 Message-Id: <20180919021100.3380-12-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Greg Kroah-Hartman Cc: Felipe Balbi Cc: Johan Hovold Cc: "Gustavo A. R. Silva" Cc: linux-usb@vger.kernel.org Signed-off-by: Kees Cook Acked-by: Greg Kroah-Hartman --- drivers/usb/wusbcore/crypto.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/usb/wusbcore/crypto.c b/drivers/usb/wusbcore/crypto.c index aff50eb09ca9..68ddee86a886 100644 --- a/drivers/usb/wusbcore/crypto.c +++ b/drivers/usb/wusbcore/crypto.c @@ -189,7 +189,7 @@ struct wusb_mac_scratch { * NOTE: blen is not aligned to a block size, we'll pad zeros, that's * what sg[4] is for. Maybe there is a smarter way to do this. 
*/ -static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc, +static int wusb_ccm_mac(struct crypto_sync_skcipher *tfm_cbc, struct crypto_cipher *tfm_aes, struct wusb_mac_scratch *scratch, void *mic, @@ -198,7 +198,7 @@ static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc, size_t blen) { int result = 0; - SKCIPHER_REQUEST_ON_STACK(req, tfm_cbc); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm_cbc); struct scatterlist sg[4], sg_dst; void *dst_buf; size_t dst_size; @@ -224,7 +224,7 @@ static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc, if (!dst_buf) goto error_dst_buf; - iv = kzalloc(crypto_skcipher_ivsize(tfm_cbc), GFP_KERNEL); + iv = kzalloc(crypto_sync_skcipher_ivsize(tfm_cbc), GFP_KERNEL); if (!iv) goto error_iv; @@ -251,7 +251,7 @@ static int wusb_ccm_mac(struct crypto_skcipher *tfm_cbc, sg_set_page(&sg[3], ZERO_PAGE(0), zero_padding, 0); sg_init_one(&sg_dst, dst_buf, dst_size); - skcipher_request_set_tfm(req, tfm_cbc); + skcipher_request_set_sync_tfm(req, tfm_cbc); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, sg, &sg_dst, dst_size, iv); result = crypto_skcipher_encrypt(req); @@ -298,19 +298,19 @@ ssize_t wusb_prf(void *out, size_t out_size, { ssize_t result, bytes = 0, bitr; struct aes_ccm_nonce n = *_n; - struct crypto_skcipher *tfm_cbc; + struct crypto_sync_skcipher *tfm_cbc; struct crypto_cipher *tfm_aes; struct wusb_mac_scratch *scratch; u64 sfn = 0; __le64 sfn_le; - tfm_cbc = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC); + tfm_cbc = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0); if (IS_ERR(tfm_cbc)) { result = PTR_ERR(tfm_cbc); printk(KERN_ERR "E: can't load CBC(AES): %d\n", (int)result); goto error_alloc_cbc; } - result = crypto_skcipher_setkey(tfm_cbc, key, 16); + result = crypto_sync_skcipher_setkey(tfm_cbc, key, 16); if (result < 0) { printk(KERN_ERR "E: can't set CBC key: %d\n", (int)result); goto error_setkey_cbc; @@ -351,7 +351,7 @@ ssize_t wusb_prf(void *out, size_t out_size, crypto_free_cipher(tfm_aes); error_alloc_aes: error_setkey_cbc: - crypto_free_skcipher(tfm_cbc); + crypto_free_sync_skcipher(tfm_cbc); error_alloc_cbc: return result; } From patchwork Wed Sep 19 02:10:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605155 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 803A26CB for ; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 636DB2C0D2 for ; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 611362C0A3; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E6B562C0D7 for ; Wed, 19 Sep 2018 02:12:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730970AbeISHqq (ORCPT ); Wed, 19 Sep 2018 03:46:46 -0400 Received: from mail-pg1-f194.google.com ([209.85.215.194]:39218 "EHLO 
mail-pg1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730963AbeISHqp (ORCPT ); Wed, 19 Sep 2018 03:46:45 -0400 Received: by mail-pg1-f194.google.com with SMTP id i190-v6so1939874pgc.6 for ; Tue, 18 Sep 2018 19:11:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=SEyuJwkoQ2Dasx4pBI9qW1frw8ZMgAfw1aEB93hmkT4=; b=ME1YLMEcwVW5CvLK61JPV5xkx2Q6Ujl+0UmgSgMkpavx4ePc6fg/ZZfTqH08n4WubU UUfTsyou7rHs4y+JV8WUHjcdBFtHgNYwZt3Etcp7lD9nTzqgzoN5SOnNy56BplViDW+V bVGUUVjTRnQrPV6xNOKyKJ2fOT/e9SVMtleUM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=SEyuJwkoQ2Dasx4pBI9qW1frw8ZMgAfw1aEB93hmkT4=; b=GlfwkmgZKGbURLIowtSrHncyUiQGTPEHm5+rKFnK+17Hiu0z8N5Yr3LCdjJ6Yj3KfA pRwSGh5dmke/NqVQeHmx41HjJqLYk4lCc5Sw2I/5p5dKb11PHhqX7F3nyfSg2SGr+JCD iv64tRjJNgmfcptpAV3nGHFiIDCQTN0cWeydgEgmtzfE4Twb6qQxJ6YHo4E86JjzWdPO pnEPgS+PmgWctwiMUPO2JC6mxsK4o2DM8TP5NiH6FuCU1WeX5y3ROa6KN3aMYb+vX7ez pRdB+vtx0mtnSgGwTSRpaMe+cKy/AxMbuxBiHvbNS4GE8zKTAayrdutC5cwx5A5jcE7i 6/kw== X-Gm-Message-State: APzg51DC043Dl7QJB2HClmZ2+squD0wFX9A4ShiTWwzMxzAU0UU0Wizz PdSjlqNq3bdzz5RlPvqT63zDwg== X-Google-Smtp-Source: ANB0VdYxsl+CFRim8nKheXmwe65x+ntbflgNz7j3EqkNewUXcAGWTouDggpJUCbxEXw2ScaJjgi8sQ== X-Received: by 2002:a62:2e02:: with SMTP id u2-v6mr33896127pfu.134.1537323077349; Tue, 18 Sep 2018 19:11:17 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id r25-v6sm23107039pgm.59.2018.09.18.19.11.09 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:10 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Tom Lendacky , Gary Hook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 12/23] crypto: ccp - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:49 -0700 Message-Id: <20180919021100.3380-13-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. 
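The driver-side patches in this series all share the same fallback shape. The condensed sketch below is hypothetical (the context struct and function names are not the ccp driver's), but it uses only the helpers visible in the diff that follows and shows where the on-stack sub-request fits.

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/types.h>
#include <linux/err.h>

/* Hypothetical driver context holding a synchronous software fallback. */
struct example_cipher_ctx {
	struct crypto_sync_skcipher *fallback;
};

static int example_fallback_init(struct example_cipher_ctx *ctx,
				 const char *alg_name)
{
	ctx->fallback = crypto_alloc_sync_skcipher(alg_name, 0,
						   CRYPTO_ALG_NEED_FALLBACK);
	return PTR_ERR_OR_ZERO(ctx->fallback);
}

static int example_fallback_crypt(struct example_cipher_ctx *ctx,
				  struct scatterlist *src,
				  struct scatterlist *dst,
				  unsigned int nbytes, u8 *iv, bool enc)
{
	/* The request size is bounded, so it can live on the stack. */
	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback);
	int ret;

	skcipher_request_set_sync_tfm(subreq, ctx->fallback);
	skcipher_request_set_callback(subreq, 0, NULL, NULL);
	skcipher_request_set_crypt(subreq, src, dst, nbytes, iv);
	ret = enc ? crypto_skcipher_encrypt(subreq)
		  : crypto_skcipher_decrypt(subreq);
	return ret;
}

static void example_fallback_exit(struct example_cipher_ctx *ctx)
{
	crypto_free_sync_skcipher(ctx->fallback);
}

Keying the fallback goes through crypto_sync_skcipher_setkey(), as in the ccp_aes_xts_setkey() change below.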
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Tom Lendacky Cc: Gary Hook Signed-off-by: Kees Cook --- drivers/crypto/ccp/ccp-crypto-aes-xts.c | 13 +++++++------ drivers/crypto/ccp/ccp-crypto.h | 2 +- 2 files changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c index 94b5bcf5b628..ca4630b8395f 100644 --- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c +++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c @@ -102,7 +102,7 @@ static int ccp_aes_xts_setkey(struct crypto_ablkcipher *tfm, const u8 *key, ctx->u.aes.key_len = key_len / 2; sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len); - return crypto_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len); + return crypto_sync_skcipher_setkey(ctx->u.aes.tfm_skcipher, key, key_len); } static int ccp_aes_xts_crypt(struct ablkcipher_request *req, @@ -151,12 +151,13 @@ static int ccp_aes_xts_crypt(struct ablkcipher_request *req, (ctx->u.aes.key_len != AES_KEYSIZE_256)) fallback = 1; if (fallback) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->u.aes.tfm_skcipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, + ctx->u.aes.tfm_skcipher); /* Use the fallback to process the request for any * unsupported unit sizes or key sizes */ - skcipher_request_set_tfm(subreq, ctx->u.aes.tfm_skcipher); + skcipher_request_set_sync_tfm(subreq, ctx->u.aes.tfm_skcipher); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -203,12 +204,12 @@ static int ccp_aes_xts_decrypt(struct ablkcipher_request *req) static int ccp_aes_xts_cra_init(struct crypto_tfm *tfm) { struct ccp_ctx *ctx = crypto_tfm_ctx(tfm); - struct crypto_skcipher *fallback_tfm; + struct crypto_sync_skcipher *fallback_tfm; ctx->complete = ccp_aes_xts_complete; ctx->u.aes.key_len = 0; - fallback_tfm = crypto_alloc_skcipher("xts(aes)", 0, + fallback_tfm = crypto_alloc_sync_skcipher("xts(aes)", 0, CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(fallback_tfm)) { @@ -226,7 +227,7 @@ static void ccp_aes_xts_cra_exit(struct crypto_tfm *tfm) { struct ccp_ctx *ctx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(ctx->u.aes.tfm_skcipher); + crypto_free_sync_skcipher(ctx->u.aes.tfm_skcipher); } static int ccp_register_aes_xts_alg(struct list_head *head, diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h index b9fd090c46c2..28819e11db96 100644 --- a/drivers/crypto/ccp/ccp-crypto.h +++ b/drivers/crypto/ccp/ccp-crypto.h @@ -88,7 +88,7 @@ static inline struct ccp_crypto_ahash_alg * /***** AES related defines *****/ struct ccp_aes_ctx { /* Fallback cipher for XTS with unsupported unit sizes */ - struct crypto_skcipher *tfm_skcipher; + struct crypto_sync_skcipher *tfm_skcipher; /* Cipher used to generate CMAC K1/K2 keys */ struct crypto_cipher *tfm_cipher; From patchwork Wed Sep 19 02:10:50 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605151 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6C83F17E0 for ; Wed, 19 Sep 2018 02:12:08 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 554872C0CA for ; Wed, 19 Sep 2018 02:12:08 +0000 (UTC) Received: 
by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 530272BF67; Wed, 19 Sep 2018 02:12:08 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6A7E02C0D4 for ; Wed, 19 Sep 2018 02:11:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730779AbeISHrX (ORCPT ); Wed, 19 Sep 2018 03:47:23 -0400 Received: from mail-pg1-f195.google.com ([209.85.215.195]:45538 "EHLO mail-pg1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730905AbeISHqr (ORCPT ); Wed, 19 Sep 2018 03:46:47 -0400 Received: by mail-pg1-f195.google.com with SMTP id x186-v6so565916pgd.12 for ; Tue, 18 Sep 2018 19:11:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=2qfRxJpua4aeq0G3/0ZgD5G+YtDBNQl7hzf7l+aYl6c=; b=gVZZ+na1p6a96r3OUGbK1/uYrj42+tQQOkVkh8xhyCZvD1eYL5qHs0qeGE7UThAcbp CFCF4BJonm9RSS3zgOkF8zPcmygPd02FOHcKLJKfCtex2s1MJf+fQsmKbL/YYiKI7xnl GXLC/wDwiVD+gzVv6+7vs1+5uVJjsjfZFiCgo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=2qfRxJpua4aeq0G3/0ZgD5G+YtDBNQl7hzf7l+aYl6c=; b=dPTX56/5In77pmfpcfkbzwam6UWxdLm+ILW/lsTO2nx9/uNHW4nBGNQ+fN4EF5F3Ft BvHj7Q+7zy7FSQpvoFYB1XO71sXEsQm2EX6v8nCx7igMS82Rf2qttschkKwocvkxIttc IZe12lqY50qNy6rSUFnuhT1glAcZS4CyOsGTkktKgQCcED//oZA3C9zipaZDQLeeV/nw pvpkxFoQKV9xBMYdOHgCbUBnCrNDJKHc4+LeKTh7xRq1a7OpYj3N0p0facHN5j0Ym0FI 6jnTbwF25K4S0WinXb79xEw0PVlCBRC6OwSaLcFIXdfyLfWdkAZoFIx+5e0ipNmNqtBo k5ag== X-Gm-Message-State: APzg51CcyCEBREpXS5xbwCBr4MQCufGttYSvTD0ZRD2c+XFrc/lQXfwp 3UmjXZZQT5eZYnN69rWxYz+1ig== X-Google-Smtp-Source: ANB0Vdarif3gFOtHEDFxr0mRbO/yD/7QQmomdH8M7LfMMW4CYuU3D9ry8Ux/FyQgPQgd02EFoOev1A== X-Received: by 2002:a62:ad9:: with SMTP id 86-v6mr33528849pfk.57.1537323079781; Tue, 18 Sep 2018 19:11:19 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id q80-v6sm28324577pfd.15.2018.09.18.19.11.10 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:17 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , "Leonidas S. Barbosa" , Paulo Flabiano Smorigo , Benjamin Herrenschmidt , Paul Mackerras , Michael Ellerman , linuxppc-dev@lists.ozlabs.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 13/23] crypto: vmx - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:50 -0700 Message-Id: <20180919021100.3380-14-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. 
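The vmx conversion also illustrates the init/exit half of the pattern: allocate the synchronous fallback under the algorithm's own name, mirror the wrapping tfm's flags onto it, and release it with the matching free helper. A condensed sketch follows; the struct and function names are hypothetical, but each call (including the crypto_tfm cast) mirrors the p8_aes_* code in the diff below.

#include <crypto/skcipher.h>
#include <linux/err.h>

/* Hypothetical per-tfm context mirroring the p8_aes_*_ctx changes below. */
struct example_vmx_ctx {
	struct crypto_sync_skcipher *fallback;
};

static int example_vmx_init(struct crypto_tfm *tfm)
{
	const char *alg = crypto_tfm_alg_name(tfm);
	struct example_vmx_ctx *ctx = crypto_tfm_ctx(tfm);
	struct crypto_sync_skcipher *fallback;

	/* CRYPTO_ALG_ASYNC is gone from the mask; "sync" is implied. */
	fallback = crypto_alloc_sync_skcipher(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(fallback))
		return PTR_ERR(fallback);

	/* Propagate the wrapping tfm's flags, cast as in the vmx code. */
	crypto_sync_skcipher_set_flags(fallback,
		crypto_skcipher_get_flags((struct crypto_skcipher *)tfm));

	ctx->fallback = fallback;
	return 0;
}

static void example_vmx_exit(struct crypto_tfm *tfm)
{
	struct example_vmx_ctx *ctx = crypto_tfm_ctx(tfm);

	if (ctx->fallback) {
		crypto_free_sync_skcipher(ctx->fallback);
		ctx->fallback = NULL;
	}
}

At crypt time the driver only takes this fallback when in_interrupt() is true, which is exactly the context where a synchronous, fixed-size on-stack request is needed.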
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: "Leonidas S. Barbosa" Cc: Paulo Flabiano Smorigo Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Cc: Michael Ellerman Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Kees Cook --- drivers/crypto/vmx/aes_cbc.c | 22 +++++++++++----------- drivers/crypto/vmx/aes_ctr.c | 18 +++++++++--------- drivers/crypto/vmx/aes_xts.c | 18 +++++++++--------- 3 files changed, 29 insertions(+), 29 deletions(-) diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c index b71895871be3..c5c5ff82b52e 100644 --- a/drivers/crypto/vmx/aes_cbc.c +++ b/drivers/crypto/vmx/aes_cbc.c @@ -32,7 +32,7 @@ #include "aesp8-ppc.h" struct p8_aes_cbc_ctx { - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct aes_key enc_key; struct aes_key dec_key; }; @@ -40,11 +40,11 @@ struct p8_aes_cbc_ctx { static int p8_aes_cbc_init(struct crypto_tfm *tfm) { const char *alg = crypto_tfm_alg_name(tfm); - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct p8_aes_cbc_ctx *ctx = crypto_tfm_ctx(tfm); - fallback = crypto_alloc_skcipher(alg, 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + fallback = crypto_alloc_sync_skcipher(alg, 0, + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(fallback)) { printk(KERN_ERR @@ -53,7 +53,7 @@ static int p8_aes_cbc_init(struct crypto_tfm *tfm) return PTR_ERR(fallback); } - crypto_skcipher_set_flags( + crypto_sync_skcipher_set_flags( fallback, crypto_skcipher_get_flags((struct crypto_skcipher *)tfm)); ctx->fallback = fallback; @@ -66,7 +66,7 @@ static void p8_aes_cbc_exit(struct crypto_tfm *tfm) struct p8_aes_cbc_ctx *ctx = crypto_tfm_ctx(tfm); if (ctx->fallback) { - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); ctx->fallback = NULL; } } @@ -86,7 +86,7 @@ static int p8_aes_cbc_setkey(struct crypto_tfm *tfm, const u8 *key, pagefault_enable(); preempt_enable(); - ret += crypto_skcipher_setkey(ctx->fallback, key, keylen); + ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); return ret; } @@ -100,8 +100,8 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc, crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm)); if (in_interrupt()) { - SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); - skcipher_request_set_tfm(req, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); + skcipher_request_set_sync_tfm(req, ctx->fallback); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); ret = crypto_skcipher_encrypt(req); @@ -139,8 +139,8 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc, crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm)); if (in_interrupt()) { - SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); - skcipher_request_set_tfm(req, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); + skcipher_request_set_sync_tfm(req, ctx->fallback); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); ret = crypto_skcipher_decrypt(req); diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c index cd777c75291d..8a2fe092cb8e 100644 --- a/drivers/crypto/vmx/aes_ctr.c +++ b/drivers/crypto/vmx/aes_ctr.c @@ -32,18 +32,18 @@ #include "aesp8-ppc.h" struct p8_aes_ctr_ctx { - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct aes_key enc_key; }; static int p8_aes_ctr_init(struct crypto_tfm *tfm) { 
const char *alg = crypto_tfm_alg_name(tfm); - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct p8_aes_ctr_ctx *ctx = crypto_tfm_ctx(tfm); - fallback = crypto_alloc_skcipher(alg, 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + fallback = crypto_alloc_sync_skcipher(alg, 0, + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(fallback)) { printk(KERN_ERR "Failed to allocate transformation for '%s': %ld\n", @@ -51,7 +51,7 @@ static int p8_aes_ctr_init(struct crypto_tfm *tfm) return PTR_ERR(fallback); } - crypto_skcipher_set_flags( + crypto_sync_skcipher_set_flags( fallback, crypto_skcipher_get_flags((struct crypto_skcipher *)tfm)); ctx->fallback = fallback; @@ -64,7 +64,7 @@ static void p8_aes_ctr_exit(struct crypto_tfm *tfm) struct p8_aes_ctr_ctx *ctx = crypto_tfm_ctx(tfm); if (ctx->fallback) { - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); ctx->fallback = NULL; } } @@ -83,7 +83,7 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key, pagefault_enable(); preempt_enable(); - ret += crypto_skcipher_setkey(ctx->fallback, key, keylen); + ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); return ret; } @@ -119,8 +119,8 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc, crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm)); if (in_interrupt()) { - SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); - skcipher_request_set_tfm(req, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); + skcipher_request_set_sync_tfm(req, ctx->fallback); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); ret = crypto_skcipher_encrypt(req); diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c index e9954a7d4694..ecd64e5cc5bb 100644 --- a/drivers/crypto/vmx/aes_xts.c +++ b/drivers/crypto/vmx/aes_xts.c @@ -33,7 +33,7 @@ #include "aesp8-ppc.h" struct p8_aes_xts_ctx { - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct aes_key enc_key; struct aes_key dec_key; struct aes_key tweak_key; @@ -42,11 +42,11 @@ struct p8_aes_xts_ctx { static int p8_aes_xts_init(struct crypto_tfm *tfm) { const char *alg = crypto_tfm_alg_name(tfm); - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct p8_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm); - fallback = crypto_alloc_skcipher(alg, 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + fallback = crypto_alloc_sync_skcipher(alg, 0, + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(fallback)) { printk(KERN_ERR "Failed to allocate transformation for '%s': %ld\n", @@ -54,7 +54,7 @@ static int p8_aes_xts_init(struct crypto_tfm *tfm) return PTR_ERR(fallback); } - crypto_skcipher_set_flags( + crypto_sync_skcipher_set_flags( fallback, crypto_skcipher_get_flags((struct crypto_skcipher *)tfm)); ctx->fallback = fallback; @@ -67,7 +67,7 @@ static void p8_aes_xts_exit(struct crypto_tfm *tfm) struct p8_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm); if (ctx->fallback) { - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); ctx->fallback = NULL; } } @@ -92,7 +92,7 @@ static int p8_aes_xts_setkey(struct crypto_tfm *tfm, const u8 *key, pagefault_enable(); preempt_enable(); - ret += crypto_skcipher_setkey(ctx->fallback, key, keylen); + ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); return ret; } @@ -109,8 +109,8 @@ static int p8_aes_xts_crypt(struct blkcipher_desc *desc, crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm)); if 
(in_interrupt()) { - SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); - skcipher_request_set_tfm(req, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback); + skcipher_request_set_sync_tfm(req, ctx->fallback); skcipher_request_set_callback(req, desc->flags, NULL, NULL); skcipher_request_set_crypt(req, src, dst, nbytes, desc->info); ret = enc? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req); From patchwork Wed Sep 19 02:10:51 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605157 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CD2F213AD for ; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BB9922C0BD for ; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B9AB42C085; Wed, 19 Sep 2018 02:12:13 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8D3DB2C071 for ; Wed, 19 Sep 2018 02:12:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730829AbeISHrh (ORCPT ); Wed, 19 Sep 2018 03:47:37 -0400 Received: from mail-pl1-f195.google.com ([209.85.214.195]:44635 "EHLO mail-pl1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730964AbeISHqp (ORCPT ); Wed, 19 Sep 2018 03:46:45 -0400 Received: by mail-pl1-f195.google.com with SMTP id ba4-v6so1855178plb.11 for ; Tue, 18 Sep 2018 19:11:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=uUD2Xk/3An5Aa6Ik6vjGD89CMWst4AF7yOvsmdKgrlY=; b=SkRE5MpZeaObfX7XKuI6GQh6k3fpEscQDmF4VyK5vWnOiuAT8SFa+hOSg5bmD9deLf 0w+u88GqdaFCKsYmhEtbrCgFgn84Z72SfMYhN9PYyFHIS5MxS0ttqbkBV/a4mhlGq7Fa 5zUEr/htN+C2upC6z0aFdeXoGzEUHlr7b/9Aw= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=uUD2Xk/3An5Aa6Ik6vjGD89CMWst4AF7yOvsmdKgrlY=; b=B4mPj+W8sJrtbcyTynI0VbB0dXTDFd4j46rdRDcqClOglD+q0RJ6NYKWnaKypNqC9n PRhFqajvn0c7Y/bCAJAZ6YVIAiwd/o7ROFpDMhODZZ7e7JWRP/NKg/tjoMmDAKusNqAC IIVhc750M/chCHJdvcsIP428qlSRLVPz1TILDj9EudgVOMxT0pDTykCvW9/Kkn+lMFmB Zl65SbpdDgt7Hn6x4eWtn/TUFfL78vaSEzVdcriByNZevO4cQkbIf2fBslM0Ucovx24v G/V2YgtyWdCBO3pLtV7YyIiQ7CwQxjciV28WVBLij0F720r5jXqkwaSP1/UXe1K477de 2OXw== X-Gm-Message-State: APzg51CeB6ZzN+q7t4bkgK1/p57xx2TNXO7vwf/PjnYyUoT+YWGN6asR HuyBhoDO0eF67xGzP/zlFh1kuw== X-Google-Smtp-Source: ANB0Vdago1IK+DnJ9tUj0kObdN6AND8Y9OyxKc/DZaj6rYTUzCHJ/QG7BAwfZWj3pekoTmm1Axgn3w== X-Received: by 2002:a17:902:d694:: with SMTP id v20-v6mr31483212ply.328.1537323078055; Tue, 18 Sep 2018 19:11:18 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. 
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id v23-v6sm24687234pfm.80.2018.09.18.19.11.10 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:17 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 14/23] crypto: null - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:51 -0700 Message-Id: <20180919021100.3380-15-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook --- crypto/algif_aead.c | 12 ++++++------ crypto/authenc.c | 8 ++++---- crypto/authencesn.c | 8 ++++---- crypto/crypto_null.c | 11 +++++------ crypto/echainiv.c | 4 ++-- crypto/gcm.c | 8 ++++---- crypto/seqiv.c | 4 ++-- include/crypto/internal/geniv.h | 2 +- include/crypto/null.h | 2 +- 9 files changed, 29 insertions(+), 30 deletions(-) diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c index c40a8c7ee8ae..eb100a04ce9f 100644 --- a/crypto/algif_aead.c +++ b/crypto/algif_aead.c @@ -42,7 +42,7 @@ struct aead_tfm { struct crypto_aead *aead; - struct crypto_skcipher *null_tfm; + struct crypto_sync_skcipher *null_tfm; }; static inline bool aead_sufficient_data(struct sock *sk) @@ -75,13 +75,13 @@ static int aead_sendmsg(struct socket *sock, struct msghdr *msg, size_t size) return af_alg_sendmsg(sock, msg, size, ivsize); } -static int crypto_aead_copy_sgl(struct crypto_skcipher *null_tfm, +static int crypto_aead_copy_sgl(struct crypto_sync_skcipher *null_tfm, struct scatterlist *src, struct scatterlist *dst, unsigned int len) { - SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm); + SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm); - skcipher_request_set_tfm(skreq, null_tfm); + skcipher_request_set_sync_tfm(skreq, null_tfm); skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_BACKLOG, NULL, NULL); skcipher_request_set_crypt(skreq, src, dst, len, NULL); @@ -99,7 +99,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg, struct af_alg_ctx *ctx = ask->private; struct aead_tfm *aeadc = pask->private; struct crypto_aead *tfm = aeadc->aead; - struct crypto_skcipher *null_tfm = aeadc->null_tfm; + struct crypto_sync_skcipher *null_tfm = aeadc->null_tfm; unsigned int i, as = crypto_aead_authsize(tfm); struct af_alg_async_req *areq; struct af_alg_tsgl *tsgl, *tmp; @@ -478,7 +478,7 @@ static void *aead_bind(const char *name, u32 type, u32 mask) { struct aead_tfm *tfm; struct crypto_aead *aead; - struct crypto_skcipher *null_tfm; + struct crypto_sync_skcipher *null_tfm; tfm = kzalloc(sizeof(*tfm), GFP_KERNEL); if (!tfm) diff --git a/crypto/authenc.c b/crypto/authenc.c index 4fa8d40d947b..37f54d1b2f66 100644 --- a/crypto/authenc.c +++ b/crypto/authenc.c @@ -33,7 +33,7 @@ struct authenc_instance_ctx { struct crypto_authenc_ctx { struct crypto_ahash *auth; struct crypto_skcipher *enc; - struct crypto_skcipher *null; + struct crypto_sync_skcipher *null; }; 
struct authenc_request_ctx { @@ -185,9 +185,9 @@ static int crypto_authenc_copy_assoc(struct aead_request *req) { struct crypto_aead *authenc = crypto_aead_reqtfm(req); struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc); - SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null); + SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null); - skcipher_request_set_tfm(skreq, ctx->null); + skcipher_request_set_sync_tfm(skreq, ctx->null); skcipher_request_set_callback(skreq, aead_request_flags(req), NULL, NULL); skcipher_request_set_crypt(skreq, req->src, req->dst, req->assoclen, @@ -318,7 +318,7 @@ static int crypto_authenc_init_tfm(struct crypto_aead *tfm) struct crypto_authenc_ctx *ctx = crypto_aead_ctx(tfm); struct crypto_ahash *auth; struct crypto_skcipher *enc; - struct crypto_skcipher *null; + struct crypto_sync_skcipher *null; int err; auth = crypto_spawn_ahash(&ictx->auth); diff --git a/crypto/authencesn.c b/crypto/authencesn.c index 50b804747e20..80a25cc04aec 100644 --- a/crypto/authencesn.c +++ b/crypto/authencesn.c @@ -36,7 +36,7 @@ struct crypto_authenc_esn_ctx { unsigned int reqoff; struct crypto_ahash *auth; struct crypto_skcipher *enc; - struct crypto_skcipher *null; + struct crypto_sync_skcipher *null; }; struct authenc_esn_request_ctx { @@ -183,9 +183,9 @@ static int crypto_authenc_esn_copy(struct aead_request *req, unsigned int len) { struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req); struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn); - SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null); + SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null); - skcipher_request_set_tfm(skreq, ctx->null); + skcipher_request_set_sync_tfm(skreq, ctx->null); skcipher_request_set_callback(skreq, aead_request_flags(req), NULL, NULL); skcipher_request_set_crypt(skreq, req->src, req->dst, len, NULL); @@ -341,7 +341,7 @@ static int crypto_authenc_esn_init_tfm(struct crypto_aead *tfm) struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(tfm); struct crypto_ahash *auth; struct crypto_skcipher *enc; - struct crypto_skcipher *null; + struct crypto_sync_skcipher *null; int err; auth = crypto_spawn_ahash(&ictx->auth); diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c index 0959b268966c..0bae59922a80 100644 --- a/crypto/crypto_null.c +++ b/crypto/crypto_null.c @@ -26,7 +26,7 @@ #include static DEFINE_MUTEX(crypto_default_null_skcipher_lock); -static struct crypto_skcipher *crypto_default_null_skcipher; +static struct crypto_sync_skcipher *crypto_default_null_skcipher; static int crypto_default_null_skcipher_refcnt; static int null_compress(struct crypto_tfm *tfm, const u8 *src, @@ -152,16 +152,15 @@ MODULE_ALIAS_CRYPTO("compress_null"); MODULE_ALIAS_CRYPTO("digest_null"); MODULE_ALIAS_CRYPTO("cipher_null"); -struct crypto_skcipher *crypto_get_default_null_skcipher(void) +struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void) { - struct crypto_skcipher *tfm; + struct crypto_sync_skcipher *tfm; mutex_lock(&crypto_default_null_skcipher_lock); tfm = crypto_default_null_skcipher; if (!tfm) { - tfm = crypto_alloc_skcipher("ecb(cipher_null)", - 0, CRYPTO_ALG_ASYNC); + tfm = crypto_alloc_sync_skcipher("ecb(cipher_null)", 0, 0); if (IS_ERR(tfm)) goto unlock; @@ -181,7 +180,7 @@ void crypto_put_default_null_skcipher(void) { mutex_lock(&crypto_default_null_skcipher_lock); if (!--crypto_default_null_skcipher_refcnt) { - crypto_free_skcipher(crypto_default_null_skcipher); + crypto_free_sync_skcipher(crypto_default_null_skcipher); crypto_default_null_skcipher = NULL; } 
mutex_unlock(&crypto_default_null_skcipher_lock); diff --git a/crypto/echainiv.c b/crypto/echainiv.c index 45819e6015bf..77e607fdbfb7 100644 --- a/crypto/echainiv.c +++ b/crypto/echainiv.c @@ -47,9 +47,9 @@ static int echainiv_encrypt(struct aead_request *req) info = req->iv; if (req->src != req->dst) { - SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull); + SYNC_SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull); - skcipher_request_set_tfm(nreq, ctx->sknull); + skcipher_request_set_sync_tfm(nreq, ctx->sknull); skcipher_request_set_callback(nreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(nreq, req->src, req->dst, diff --git a/crypto/gcm.c b/crypto/gcm.c index 0ad879e1f9b2..e438492db2ca 100644 --- a/crypto/gcm.c +++ b/crypto/gcm.c @@ -50,7 +50,7 @@ struct crypto_rfc4543_instance_ctx { struct crypto_rfc4543_ctx { struct crypto_aead *child; - struct crypto_skcipher *null; + struct crypto_sync_skcipher *null; u8 nonce[4]; }; @@ -1067,9 +1067,9 @@ static int crypto_rfc4543_copy_src_to_dst(struct aead_request *req, bool enc) unsigned int authsize = crypto_aead_authsize(aead); unsigned int nbytes = req->assoclen + req->cryptlen - (enc ? 0 : authsize); - SKCIPHER_REQUEST_ON_STACK(nreq, ctx->null); + SYNC_SKCIPHER_REQUEST_ON_STACK(nreq, ctx->null); - skcipher_request_set_tfm(nreq, ctx->null); + skcipher_request_set_sync_tfm(nreq, ctx->null); skcipher_request_set_callback(nreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(nreq, req->src, req->dst, nbytes, NULL); @@ -1093,7 +1093,7 @@ static int crypto_rfc4543_init_tfm(struct crypto_aead *tfm) struct crypto_aead_spawn *spawn = &ictx->aead; struct crypto_rfc4543_ctx *ctx = crypto_aead_ctx(tfm); struct crypto_aead *aead; - struct crypto_skcipher *null; + struct crypto_sync_skcipher *null; unsigned long align; int err = 0; diff --git a/crypto/seqiv.c b/crypto/seqiv.c index 39dbf2f7e5f5..64a412be255e 100644 --- a/crypto/seqiv.c +++ b/crypto/seqiv.c @@ -73,9 +73,9 @@ static int seqiv_aead_encrypt(struct aead_request *req) info = req->iv; if (req->src != req->dst) { - SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull); + SYNC_SKCIPHER_REQUEST_ON_STACK(nreq, ctx->sknull); - skcipher_request_set_tfm(nreq, ctx->sknull); + skcipher_request_set_sync_tfm(nreq, ctx->sknull); skcipher_request_set_callback(nreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(nreq, req->src, req->dst, diff --git a/include/crypto/internal/geniv.h b/include/crypto/internal/geniv.h index 2bcfb931bc5b..71be24cd59bd 100644 --- a/include/crypto/internal/geniv.h +++ b/include/crypto/internal/geniv.h @@ -20,7 +20,7 @@ struct aead_geniv_ctx { spinlock_t lock; struct crypto_aead *child; - struct crypto_skcipher *sknull; + struct crypto_sync_skcipher *sknull; u8 salt[] __attribute__ ((aligned(__alignof__(u32)))); }; diff --git a/include/crypto/null.h b/include/crypto/null.h index 15aeef6e30ef..0ef577cc00e3 100644 --- a/include/crypto/null.h +++ b/include/crypto/null.h @@ -9,7 +9,7 @@ #define NULL_DIGEST_SIZE 0 #define NULL_IV_SIZE 0 -struct crypto_skcipher *crypto_get_default_null_skcipher(void); +struct crypto_sync_skcipher *crypto_get_default_null_skcipher(void); void crypto_put_default_null_skcipher(void); #endif From patchwork Wed Sep 19 02:10:52 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605143 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by 
pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 745B913AD for ; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 611C32C082 for ; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5E88C2C085; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DE8C92C09E for ; Wed, 19 Sep 2018 02:11:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731018AbeISHqt (ORCPT ); Wed, 19 Sep 2018 03:46:49 -0400 Received: from mail-pl1-f193.google.com ([209.85.214.193]:40728 "EHLO mail-pl1-f193.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730962AbeISHqs (ORCPT ); Wed, 19 Sep 2018 03:46:48 -0400 Received: by mail-pl1-f193.google.com with SMTP id s17-v6so1863410plp.7 for ; Tue, 18 Sep 2018 19:11:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=kx1QnHesxnHwVj0JBlqIb+29TwWzoMEIZdMibRYpRYI=; b=iQU8UscCxmOIiFMdP5x2fdKDlLaVpw6lAn+XjeboyGHTs9LxgemVhMHjr77sHRrOJp zdyYqxubwdfyNv3s3h3nu3fx7cZZy+QoLv6W90wBnoqpo1V+tzC4/l7vktYC0FqfGVTC fGkQtbAXynbOKtpLjt1pFAI3SN2feeLsfo1cs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=kx1QnHesxnHwVj0JBlqIb+29TwWzoMEIZdMibRYpRYI=; b=NmMkvNIlqiZ/GqgscqVBVHHDTj/tVbptQK5suXS7eBEAn4lqTd5aVgNDinMieurmVB mLW6uXsNLaxWVqTP3AvKNOt6oGUaAVqEmv2H1zWWuRbD6rKP0mfRrezQpDroJ4CiI2lr jVqdsyMkpCa7SPN4bDupY7TCU9Lsp8hzKHwEnFmiFx23x2DqBu18DGgye1gj14yRiYeZ z2OrRs/8iBeWuK9Uwta64ZOuu1Jp0KYmMoZTI0jkb2v4SSaiW60RZgbGe6KqyW0wW+7z wy5dmu1N3kxn9ujVkQ7WksvJI7fw2OeYdT4sClDX8N0nQaxq5+mGfPD6Ul7hpAJN5Gge +9ew== X-Gm-Message-State: APzg51BoBqf/xp1qS+kxpf4WvUtA/yerHycEph9uRM3qH/ecnbX0Gsx8 hV2uqFueIRtP9MGMDDfjH+N3XQ== X-Google-Smtp-Source: ANB0VdYT2f7vYpCaO4MaT22UZEMLcJme2L+YXkPKEUaVJOjDXsUJUeyuDpO1S05h6j1ahwOt7T5tjQ== X-Received: by 2002:a17:902:bc43:: with SMTP id t3-v6mr18798253plz.199.1537323080686; Tue, 18 Sep 2018 19:11:20 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. 
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id f11-v6sm27456041pfa.131.2018.09.18.19.11.12 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:17 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 15/23] crypto: cryptd - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:52 -0700 Message-Id: <20180919021100.3380-16-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook --- crypto/cryptd.c | 32 +++++++++++++++++--------------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index addca7bae33f..7118fb5efbaa 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -76,7 +76,7 @@ struct cryptd_blkcipher_request_ctx { struct cryptd_skcipher_ctx { atomic_t refcnt; - struct crypto_skcipher *child; + struct crypto_sync_skcipher *child; }; struct cryptd_skcipher_request_ctx { @@ -449,14 +449,16 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent, const u8 *key, unsigned int keylen) { struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(parent); - struct crypto_skcipher *child = ctx->child; + struct crypto_sync_skcipher *child = ctx->child; int err; - crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(parent) & + crypto_sync_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(child, + crypto_skcipher_get_flags(parent) & CRYPTO_TFM_REQ_MASK); - err = crypto_skcipher_setkey(child, key, keylen); - crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(child) & + err = crypto_sync_skcipher_setkey(child, key, keylen); + crypto_skcipher_set_flags(parent, + crypto_sync_skcipher_get_flags(child) & CRYPTO_TFM_RES_MASK); return err; } @@ -483,13 +485,13 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base, struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - struct crypto_skcipher *child = ctx->child; - SKCIPHER_REQUEST_ON_STACK(subreq, child); + struct crypto_sync_skcipher *child = ctx->child; + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child); if (unlikely(err == -EINPROGRESS)) goto out; - skcipher_request_set_tfm(subreq, child); + skcipher_request_set_sync_tfm(subreq, child); skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, @@ -511,13 +513,13 @@ static void cryptd_skcipher_decrypt(struct crypto_async_request *base, struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - struct 
crypto_skcipher *child = ctx->child; - SKCIPHER_REQUEST_ON_STACK(subreq, child); + struct crypto_sync_skcipher *child = ctx->child; + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child); if (unlikely(err == -EINPROGRESS)) goto out; - skcipher_request_set_tfm(subreq, child); + skcipher_request_set_sync_tfm(subreq, child); skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, @@ -568,7 +570,7 @@ static int cryptd_skcipher_init_tfm(struct crypto_skcipher *tfm) if (IS_ERR(cipher)) return PTR_ERR(cipher); - ctx->child = cipher; + ctx->child = (struct crypto_sync_skcipher *)cipher; crypto_skcipher_set_reqsize( tfm, sizeof(struct cryptd_skcipher_request_ctx)); return 0; @@ -578,7 +580,7 @@ static void cryptd_skcipher_exit_tfm(struct crypto_skcipher *tfm) { struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - crypto_free_skcipher(ctx->child); + crypto_free_sync_skcipher(ctx->child); } static void cryptd_skcipher_free(struct skcipher_instance *inst) @@ -1243,7 +1245,7 @@ struct crypto_skcipher *cryptd_skcipher_child(struct cryptd_skcipher *tfm) { struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(&tfm->base); - return ctx->child; + return &ctx->child->base; } EXPORT_SYMBOL_GPL(cryptd_skcipher_child); From patchwork Wed Sep 19 02:10:53 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605147 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4297C13AD for ; Wed, 19 Sep 2018 02:11:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 31D8B2BF4E for ; Wed, 19 Sep 2018 02:11:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2F3002C085; Wed, 19 Sep 2018 02:11:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A2AB32BF25 for ; Wed, 19 Sep 2018 02:11:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731022AbeISHqu (ORCPT ); Wed, 19 Sep 2018 03:46:50 -0400 Received: from mail-pl1-f195.google.com ([209.85.214.195]:46673 "EHLO mail-pl1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731016AbeISHqt (ORCPT ); Wed, 19 Sep 2018 03:46:49 -0400 Received: by mail-pl1-f195.google.com with SMTP id t19-v6so1850862ply.13 for ; Tue, 18 Sep 2018 19:11:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=I2hcMVXGV7OApvUgHaXTiO8YqbUDBC+hO91hCX/pOYw=; b=nWedSD77PbdXbq43SjIv5rQuXu9+93dyqtUH6/E2RiyVzFTAl5iKUDKx1d9d3f9k// EBs2+ESdGv6yui/pPVlJgbM/SkmdHhO3D6FDV1AMgjFOFAzXxA0eAD7sDEjn6h0oz4/E NGqZ/BxSS0uf6ybrKUXv9YHbY/3fiWBNLutSk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; 
bh=I2hcMVXGV7OApvUgHaXTiO8YqbUDBC+hO91hCX/pOYw=; b=aKF0OhXvyKIPB3j93uo62vBcCNPRU9MzYhx9zTwHS6Q+pnPNSmhKr2av9Hecy+46rz 1ewpZu6zCxWtFBbf5WH0XkihMrk62otIWZnQ7CrB8PoVua22eBXJNFYUYUX1aiy8izD1 NJQYP7ikzrISOgtjAvnM6vxios4Td3C3mBJ4cDnQFrEvU/NOIRzBgrf/ZtZpSckU7fDT zxJlrPJ/ZKPN3o1BZs5T/pJH6BjYkkrbxvlT0ZB4Kznpef6u1CP6ig+fOxsq6cpuvWLX C4F4jPfu1FAwAV60fH7iSCEcsnWoxg8yCdGgGDDH23PyECFKAgtvLcKT4QDiQ00jFp/O PQ4A== X-Gm-Message-State: APzg51Ad9JdtR8igMHK/N06kUj8L4veuWH766vxY5BNhutv97QV/vP3E UYTQODTjq6rXp7E64bSPU2SosQ== X-Google-Smtp-Source: ANB0VdakeZpqA62pIHcjWLmODfeSDbBHnDvggQr5UvRBkeFM5mt43H+cjzIv9vb1GM7ykyOWylVM0w== X-Received: by 2002:a17:902:e20b:: with SMTP id ce11-v6mr32383730plb.136.1537323081652; Tue, 18 Sep 2018 19:11:21 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id f62-v6sm27169215pfg.74.2018.09.18.19.11.12 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:17 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 16/23] crypto: sahara - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:53 -0700 Message-Id: <20180919021100.3380-17-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook --- drivers/crypto/sahara.c | 31 +++++++++++++++---------------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c index e7540a5b8197..bbf166a97ad3 100644 --- a/drivers/crypto/sahara.c +++ b/drivers/crypto/sahara.c @@ -149,7 +149,7 @@ struct sahara_ctx { /* AES-specific context */ int keylen; u8 key[AES_KEYSIZE_128]; - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; }; struct sahara_aes_reqctx { @@ -621,14 +621,14 @@ static int sahara_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, /* * The requested key size is not supported by HW, do a fallback. 
*/ - crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & + crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); tfm->base.crt_flags &= ~CRYPTO_TFM_RES_MASK; - tfm->base.crt_flags |= crypto_skcipher_get_flags(ctx->fallback) & + tfm->base.crt_flags |= crypto_sync_skcipher_get_flags(ctx->fallback) & CRYPTO_TFM_RES_MASK; return ret; } @@ -666,9 +666,9 @@ static int sahara_aes_ecb_encrypt(struct ablkcipher_request *req) int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -688,9 +688,9 @@ static int sahara_aes_ecb_decrypt(struct ablkcipher_request *req) int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -710,9 +710,9 @@ static int sahara_aes_cbc_encrypt(struct ablkcipher_request *req) int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -732,9 +732,9 @@ static int sahara_aes_cbc_decrypt(struct ablkcipher_request *req) int err; if (unlikely(ctx->keylen != AES_KEYSIZE_128)) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -752,8 +752,7 @@ static int sahara_aes_cra_init(struct crypto_tfm *tfm) const char *name = crypto_tfm_alg_name(tfm); struct sahara_ctx *ctx = crypto_tfm_ctx(tfm); - ctx->fallback = crypto_alloc_skcipher(name, 0, - CRYPTO_ALG_ASYNC | + ctx->fallback = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->fallback)) { pr_err("Error allocating fallback algo %s\n", name); @@ -769,7 +768,7 @@ static void sahara_aes_cra_exit(struct crypto_tfm *tfm) { struct sahara_ctx *ctx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); } static u32 sahara_sha_init_hdr(struct sahara_dev *dev, From patchwork Wed Sep 19 02:10:54 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605193 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org 
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id f75-v6sm35679362pfk.85.2018.09.18.19.19.18 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:19:18 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Himanshu Jha , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 17/23] crypto: qce - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:54 -0700 Message-Id: <20180919021100.3380-18-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Himanshu Jha Signed-off-by: Kees Cook --- drivers/crypto/qce/ablkcipher.c | 13 ++++++------- drivers/crypto/qce/cipher.h | 2 +- 2 files changed, 7 insertions(+), 8 deletions(-) diff --git a/drivers/crypto/qce/ablkcipher.c b/drivers/crypto/qce/ablkcipher.c index ea4d96bf47e8..585e1cab9ae3 100644 --- a/drivers/crypto/qce/ablkcipher.c +++ b/drivers/crypto/qce/ablkcipher.c @@ -189,7 +189,7 @@ static int qce_ablkcipher_setkey(struct crypto_ablkcipher *ablk, const u8 *key, memcpy(ctx->enc_key, key, keylen); return 0; fallback: - ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); if (!ret) ctx->enc_keylen = keylen; return ret; @@ -212,9 +212,9 @@ static int qce_ablkcipher_crypt(struct ablkcipher_request *req, int encrypt) if (IS_AES(rctx->flags) && ctx->enc_keylen != AES_KEYSIZE_128 && ctx->enc_keylen != AES_KEYSIZE_256) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -245,9 +245,8 @@ static int qce_ablkcipher_init(struct crypto_tfm *tfm) memset(ctx, 0, sizeof(*ctx)); tfm->crt_ablkcipher.reqsize = sizeof(struct qce_cipher_reqctx); - ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(tfm), 0, - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK); + ctx->fallback = crypto_alloc_sync_skcipher(crypto_tfm_alg_name(tfm), + 0, CRYPTO_ALG_NEED_FALLBACK); return PTR_ERR_OR_ZERO(ctx->fallback); } @@ -255,7 +254,7 @@ static void qce_ablkcipher_exit(struct crypto_tfm *tfm) { struct qce_cipher_ctx *ctx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); } struct qce_ablkcipher_def { diff --git a/drivers/crypto/qce/cipher.h b/drivers/crypto/qce/cipher.h index 2b0278bb6e92..ee055bfe98a0 100644 --- a/drivers/crypto/qce/cipher.h +++ b/drivers/crypto/qce/cipher.h @@ -22,7 +22,7 @@ struct qce_cipher_ctx { u8 enc_key[QCE_MAX_KEY_SIZE]; unsigned int enc_keylen; - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; }; /** From patchwork Wed Sep 19 02:10:55 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605195 X-Patchwork-Delegate: herbert@gondor.apana.org.au
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id x82-v6sm35632060pfe.129.2018.09.18.19.19.18 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:19:18 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 18/23] crypto: artpec6 - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:55 -0700 Message-Id: <20180919021100.3380-19-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Jesper Nilsson Cc: Lars Persson Cc: linux-arm-kernel@axis.com Signed-off-by: Kees Cook Acked-by: Lars Persson --- drivers/crypto/axis/artpec6_crypto.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c index 7f07a5085e9b..e5a080e87ea8 100644 --- a/drivers/crypto/axis/artpec6_crypto.c +++ b/drivers/crypto/axis/artpec6_crypto.c @@ -330,7 +330,7 @@ struct artpec6_cryptotfm_context { size_t key_length; u32 key_md; int crypto_type; - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; }; struct artpec6_crypto_aead_hw_ctx { @@ -1199,15 +1199,15 @@ artpec6_crypto_ctr_crypt(struct skcipher_request *req, bool encrypt) pr_debug("counter %x will overflow (nblks %u), falling back\n", counter, counter + nblks); - ret = crypto_skcipher_setkey(ctx->fallback, ctx->aes_key, - ctx->key_length); + ret = crypto_sync_skcipher_setkey(ctx->fallback, ctx->aes_key, + ctx->key_length); if (ret) return ret; { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -1561,10 +1561,9 @@ static int artpec6_crypto_aes_ctr_init(struct crypto_skcipher *tfm) { struct artpec6_cryptotfm_context *ctx = crypto_skcipher_ctx(tfm); - ctx->fallback = crypto_alloc_skcipher(crypto_tfm_alg_name(&tfm->base), - 0, - CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK); + ctx->fallback = + crypto_alloc_sync_skcipher(crypto_tfm_alg_name(&tfm->base), + 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->fallback)) return PTR_ERR(ctx->fallback); @@ -1605,7 +1604,7 @@ static void artpec6_crypto_aes_ctr_exit(struct crypto_skcipher *tfm) { struct artpec6_cryptotfm_context *ctx = crypto_skcipher_ctx(tfm); - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); artpec6_crypto_aes_exit(tfm); } From patchwork Wed Sep 19 02:10:56 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605187 X-Patchwork-Delegate: herbert@gondor.apana.org.au 
[173.164.112.133]) by smtp.gmail.com with ESMTPSA id d81-v6sm31074217pfj.122.2018.09.18.19.19.18 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:19:18 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Harsh Jain , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 19/23] crypto: chelsio - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:56 -0700 Message-Id: <20180919021100.3380-20-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Harsh Jain Signed-off-by: Kees Cook --- drivers/crypto/chelsio/chcr_algo.c | 27 ++++++++++++++------------- drivers/crypto/chelsio/chcr_crypto.h | 2 +- 2 files changed, 15 insertions(+), 14 deletions(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 5c539af8ed60..dfc3a10bb55b 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -671,7 +671,7 @@ static int chcr_sg_ent_in_wr(struct scatterlist *src, return min(srclen, dstlen); } -static int chcr_cipher_fallback(struct crypto_skcipher *cipher, +static int chcr_cipher_fallback(struct crypto_sync_skcipher *cipher, u32 flags, struct scatterlist *src, struct scatterlist *dst, @@ -681,9 +681,9 @@ static int chcr_cipher_fallback(struct crypto_skcipher *cipher, { int err; - SKCIPHER_REQUEST_ON_STACK(subreq, cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, cipher); - skcipher_request_set_tfm(subreq, cipher); + skcipher_request_set_sync_tfm(subreq, cipher); skcipher_request_set_callback(subreq, flags, NULL, NULL); skcipher_request_set_crypt(subreq, src, dst, nbytes, iv); @@ -854,13 +854,14 @@ static int chcr_cipher_fallback_setkey(struct crypto_ablkcipher *cipher, struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher)); int err = 0; - crypto_skcipher_clear_flags(ablkctx->sw_cipher, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(ablkctx->sw_cipher, cipher->base.crt_flags & - CRYPTO_TFM_REQ_MASK); - err = crypto_skcipher_setkey(ablkctx->sw_cipher, key, keylen); + crypto_sync_skcipher_clear_flags(ablkctx->sw_cipher, + CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(ablkctx->sw_cipher, + cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK); + err = crypto_sync_skcipher_setkey(ablkctx->sw_cipher, key, keylen); tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK; tfm->crt_flags |= - crypto_skcipher_get_flags(ablkctx->sw_cipher) & + crypto_sync_skcipher_get_flags(ablkctx->sw_cipher) & CRYPTO_TFM_RES_MASK; return err; } @@ -1360,8 +1361,8 @@ static int chcr_cra_init(struct crypto_tfm *tfm) struct chcr_context *ctx = crypto_tfm_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - ablkctx->sw_cipher = crypto_alloc_skcipher(alg->cra_name, 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + ablkctx->sw_cipher = crypto_alloc_sync_skcipher(alg->cra_name, 0, + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { pr_err("failed to allocate fallback for 
%s\n", alg->cra_name); return PTR_ERR(ablkctx->sw_cipher); @@ -1390,8 +1391,8 @@ static int chcr_rfc3686_init(struct crypto_tfm *tfm) /*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes)) * cannot be used as fallback in chcr_handle_cipher_response */ - ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0, - CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK); + ablkctx->sw_cipher = crypto_alloc_sync_skcipher("ctr(aes)", 0, + CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ablkctx->sw_cipher)) { pr_err("failed to allocate fallback for %s\n", alg->cra_name); return PTR_ERR(ablkctx->sw_cipher); @@ -1406,7 +1407,7 @@ static void chcr_cra_exit(struct crypto_tfm *tfm) struct chcr_context *ctx = crypto_tfm_ctx(tfm); struct ablk_ctx *ablkctx = ABLK_CTX(ctx); - crypto_free_skcipher(ablkctx->sw_cipher); + crypto_free_sync_skcipher(ablkctx->sw_cipher); if (ablkctx->aes_generic) crypto_free_cipher(ablkctx->aes_generic); } diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h index 54835cb109e5..e26b72cfe4b6 100644 --- a/drivers/crypto/chelsio/chcr_crypto.h +++ b/drivers/crypto/chelsio/chcr_crypto.h @@ -170,7 +170,7 @@ static inline struct chcr_context *h_ctx(struct crypto_ahash *tfm) } struct ablk_ctx { - struct crypto_skcipher *sw_cipher; + struct crypto_sync_skcipher *sw_cipher; struct crypto_cipher *aes_generic; __be32 key_ctx_hdr; unsigned int enckey_len; From patchwork Wed Sep 19 02:10:57 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605185 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9DD7413AD for ; Wed, 19 Sep 2018 02:19:35 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8B1F12BFEB for ; Wed, 19 Sep 2018 02:19:35 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7F8922BFFF; Wed, 19 Sep 2018 02:19:35 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1FE882BFEB for ; Wed, 19 Sep 2018 02:19:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730854AbeISHzD (ORCPT ); Wed, 19 Sep 2018 03:55:03 -0400 Received: from mail-pg1-f195.google.com ([209.85.215.195]:43184 "EHLO mail-pg1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730900AbeISHyy (ORCPT ); Wed, 19 Sep 2018 03:54:54 -0400 Received: by mail-pg1-f195.google.com with SMTP id q19-v6so1087169pgn.10 for ; Tue, 18 Sep 2018 19:19:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=nX8MhHJeg4eisDy8vxqN/vLEivEa4yzpbmxml8l/MbY=; b=FOvEDzQPfthxL8BZJgzRXyF0feGHmaPYxCQHydY70vlhsYm5YQqZM3fktVJ0sRa3mW KjZuAMBjuweI0ue3Qm8boYUD8jM6hGUVFvc0l3VMlGn9PEzxftaUdkBK9ngSAHzwyp8g meEXoXvEoVLHrGqRHSL6vkzJJpzFGDA0FxIyY= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=nX8MhHJeg4eisDy8vxqN/vLEivEa4yzpbmxml8l/MbY=; b=U+MipgwHV+PbYAhZ99y3OeOq/6+iCXjv1EVFUk1nM/lR46KHV3nVOUG2PuzbT/E01B YTTjqTGdIIDKLsDOucWKpiq1c5ktkaLWMHpja0NeY1itXYNuGJfb+TD280bf6gZcCzyq rtwZcCjlEUR+QJjNQIcs9Ip4XXR1fnr64Vfr/axOk82psjYsTJGLqoG01CMJXEYNNi5E ouQ/i6rwHVC4O2TiiiyYrVylReQAQ+B5e6D6F6wyAfxv8Vq9Qf5fo5V9P3p80EXnsaPF 2F4sDCeCmKhJwGt6axjUPd/CqVD+dWpTVJiAh62a9n5VgPvks3+iYWO/QnhumFGESb6L du+Q== X-Gm-Message-State: APzg51BwsOPfjPO8r1j5dOSAFypPlOiGOSP/FmmEaD3zCJ4HOu76C5ly kXcqO4vuNybskV8K/1iBns6G45nysPg= X-Google-Smtp-Source: ANB0VdYHHcQp3fcNwUTAshiSXh2NHB+9sPw3BSg9QWAOv53l+H1kwUkJaMzBEYtNBXEspTQXKtYWPQ== X-Received: by 2002:a63:6ecf:: with SMTP id j198-v6mr29732918pgc.3.1537323564374; Tue, 18 Sep 2018 19:19:24 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id z5-v6sm24605991pfh.83.2018.09.18.19.19.20 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:19:23 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 20/23] crypto: mxs-dcp - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:57 -0700 Message-Id: <20180919021100.3380-21-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook --- drivers/crypto/mxs-dcp.c | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c index a10c418d4e5c..430174be6f92 100644 --- a/drivers/crypto/mxs-dcp.c +++ b/drivers/crypto/mxs-dcp.c @@ -84,7 +84,7 @@ struct dcp_async_ctx { unsigned int hot:1; /* Crypto-specific context */ - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; unsigned int key_len; uint8_t key[AES_KEYSIZE_128]; }; @@ -376,10 +376,10 @@ static int mxs_dcp_block_fallback(struct ablkcipher_request *req, int enc) { struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); struct dcp_async_ctx *ctx = crypto_ablkcipher_ctx(tfm); - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); int ret; - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->nbytes, req->info); @@ -460,16 +460,16 @@ static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, * but is supported by in-kernel software implementation, we use * software fallback. 
*/ - crypto_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(actx->fallback, + crypto_sync_skcipher_clear_flags(actx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(actx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_skcipher_setkey(actx->fallback, key, len); + ret = crypto_sync_skcipher_setkey(actx->fallback, key, len); if (!ret) return 0; tfm->base.crt_flags &= ~CRYPTO_TFM_RES_MASK; - tfm->base.crt_flags |= crypto_skcipher_get_flags(actx->fallback) & + tfm->base.crt_flags |= crypto_sync_skcipher_get_flags(actx->fallback) & CRYPTO_TFM_RES_MASK; return ret; @@ -478,11 +478,10 @@ static int mxs_dcp_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, static int mxs_dcp_aes_fallback_init(struct crypto_tfm *tfm) { const char *name = crypto_tfm_alg_name(tfm); - const uint32_t flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK; struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm); - struct crypto_skcipher *blk; + struct crypto_sync_skcipher *blk; - blk = crypto_alloc_skcipher(name, 0, flags); + blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(blk)) return PTR_ERR(blk); @@ -495,7 +494,7 @@ static void mxs_dcp_aes_fallback_exit(struct crypto_tfm *tfm) { struct dcp_async_ctx *actx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(actx->fallback); + crypto_free_sync_skcipher(actx->fallback); } /* From patchwork Wed Sep 19 02:10:58 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605145 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D16E96CB for ; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BAB712BF4E for ; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B90362C080; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 357CA2BFC0 for ; Wed, 19 Sep 2018 02:11:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731021AbeISHqu (ORCPT ); Wed, 19 Sep 2018 03:46:50 -0400 Received: from mail-pl1-f195.google.com ([209.85.214.195]:46674 "EHLO mail-pl1-f195.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731019AbeISHqt (ORCPT ); Wed, 19 Sep 2018 03:46:49 -0400 Received: by mail-pl1-f195.google.com with SMTP id t19-v6so1850876ply.13 for ; Tue, 18 Sep 2018 19:11:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=+Tj7v4HL+EFbAUO29Yph36tPWfB4V/ACZUQmXez3Df4=; b=iA8yyiD2vAOXymxeNdfJmuFil5rWrg8MFOi5/3KU/4OU9uY7AIYRcCs1yo67UEQDSx 4IEdT5l/mFpCn7LrAJ++BVF/zuzFRWZxkyhwT/tE8r2WTqlEi17fganPKZ/8R5RM3sqd O6yzwe6IFDtaS0814LtXvRbcNRunj6cqMaHpo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=+Tj7v4HL+EFbAUO29Yph36tPWfB4V/ACZUQmXez3Df4=; b=DPi2JG5yV12vEpWDmcb8bczICp0lgtLZNGNDfkizfmrSNJmoErsmkKNje3mYiLa6ga FE9FPxK/WfYI8mPMBNF3JaNCqwZ+UgWGlnenX8bvKO8m7Zc6mZO6PcvpJNfn3epKJwsL kCZSdbRnt7xgnFsh2tecOColKI8RmCQ8FABo2fnjjjAPlolNtAciz851ZnZdHP0qlLg9 SzqFnMLj+3E6j2750CJBW2cxaSufrccYnuc34IJxDWa1AOWSdzgbYiH74DVD2bDSZdua Rgx4tqqTAnoz87Gv+B7m/qeCvDP8SHXbcEVS+RI9XOQGoRStCwJkjhfqnj9WLBiblp7j VCVQ== X-Gm-Message-State: APzg51BCyvDmRrieFjmawbcgJ4NFzFjbuMRWA2xNMI8eXOuKssD8CmN+ +4Yu1BpDsX0hxYerVROTXrQmVw== X-Google-Smtp-Source: ANB0VdbkWc94GU+6bakmhLfgGguFd0wj442pWI65r1OEyxfkUFFCi8cC49x8pd0tDj5my7VJY9CEDg== X-Received: by 2002:a17:902:74c8:: with SMTP id f8-v6mr16632282plt.95.1537323082521; Tue, 18 Sep 2018 19:11:22 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id m21-v6sm36363389pgd.6.2018.09.18.19.11.15 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:11:17 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 21/23] crypto: omap-aes - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:58 -0700 Message-Id: <20180919021100.3380-22-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. 
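As context for the omap-aes diff below, here is a rough sketch of what this conversion looks like in a driver's setkey path. The function and variable names are illustrative only, not the literal omap-aes code; the point is that the fallback is now a crypto_sync_skcipher and the flag handling goes through the _sync_ accessors:

#include <linux/crypto.h>
#include <crypto/skcipher.h>

/*
 * Illustrative sketch, not actual driver code: propagate the caller's
 * request flags to the sync fallback, set the key there, and copy any
 * result flags back if the fallback setkey fails.
 */
static int example_setkey_fallback(struct crypto_ablkcipher *tfm,
				   struct crypto_sync_skcipher *fallback,
				   const u8 *key, unsigned int keylen)
{
	int ret;

	crypto_sync_skcipher_clear_flags(fallback, CRYPTO_TFM_REQ_MASK);
	crypto_sync_skcipher_set_flags(fallback, tfm->base.crt_flags &
						 CRYPTO_TFM_REQ_MASK);

	ret = crypto_sync_skcipher_setkey(fallback, key, keylen);
	if (ret) {
		tfm->base.crt_flags &= ~CRYPTO_TFM_RES_MASK;
		tfm->base.crt_flags |=
			crypto_sync_skcipher_get_flags(fallback) &
			CRYPTO_TFM_RES_MASK;
	}
	return ret;
}
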
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook --- drivers/crypto/omap-aes.c | 17 ++++++++--------- drivers/crypto/omap-aes.h | 2 +- 2 files changed, 9 insertions(+), 10 deletions(-) diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c index 9019f6b67986..a553ffddb11b 100644 --- a/drivers/crypto/omap-aes.c +++ b/drivers/crypto/omap-aes.c @@ -522,9 +522,9 @@ static int omap_aes_crypt(struct ablkcipher_request *req, unsigned long mode) !!(mode & FLAGS_CBC)); if (req->nbytes < aes_fallback_sz) { - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->fallback); - skcipher_request_set_tfm(subreq, ctx->fallback); + skcipher_request_set_sync_tfm(subreq, ctx->fallback); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, @@ -564,11 +564,11 @@ static int omap_aes_setkey(struct crypto_ablkcipher *tfm, const u8 *key, memcpy(ctx->key, key, keylen); ctx->keylen = keylen; - crypto_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & + crypto_sync_skcipher_clear_flags(ctx->fallback, CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(ctx->fallback, tfm->base.crt_flags & CRYPTO_TFM_REQ_MASK); - ret = crypto_skcipher_setkey(ctx->fallback, key, keylen); + ret = crypto_sync_skcipher_setkey(ctx->fallback, key, keylen); if (!ret) return 0; @@ -613,11 +613,10 @@ static int omap_aes_crypt_req(struct crypto_engine *engine, static int omap_aes_cra_init(struct crypto_tfm *tfm) { const char *name = crypto_tfm_alg_name(tfm); - const u32 flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK; struct omap_aes_ctx *ctx = crypto_tfm_ctx(tfm); - struct crypto_skcipher *blk; + struct crypto_sync_skcipher *blk; - blk = crypto_alloc_skcipher(name, 0, flags); + blk = crypto_alloc_sync_skcipher(name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(blk)) return PTR_ERR(blk); @@ -667,7 +666,7 @@ static void omap_aes_cra_exit(struct crypto_tfm *tfm) struct omap_aes_ctx *ctx = crypto_tfm_ctx(tfm); if (ctx->fallback) - crypto_free_skcipher(ctx->fallback); + crypto_free_sync_skcipher(ctx->fallback); ctx->fallback = NULL; } diff --git a/drivers/crypto/omap-aes.h b/drivers/crypto/omap-aes.h index fc3b46a85809..7e02920ef6f8 100644 --- a/drivers/crypto/omap-aes.h +++ b/drivers/crypto/omap-aes.h @@ -101,7 +101,7 @@ struct omap_aes_ctx { int keylen; u32 key[AES_KEYSIZE_256 / sizeof(u32)]; u8 nonce[4]; - struct crypto_skcipher *fallback; + struct crypto_sync_skcipher *fallback; struct crypto_skcipher *ctr; }; From patchwork Wed Sep 19 02:10:59 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605189 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E20E215A6 for ; Wed, 19 Sep 2018 02:19:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CFCD02BFEB for ; Wed, 19 Sep 2018 02:19:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C42C52BFFF; Wed, 19 Sep 2018 02:19:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: 
No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 28E492BFEB for ; Wed, 19 Sep 2018 02:19:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730873AbeISHyx (ORCPT ); Wed, 19 Sep 2018 03:54:53 -0400 Received: from mail-pl1-f193.google.com ([209.85.214.193]:46161 "EHLO mail-pl1-f193.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730851AbeISHyw (ORCPT ); Wed, 19 Sep 2018 03:54:52 -0400 Received: by mail-pl1-f193.google.com with SMTP id t19-v6so1859120ply.13 for ; Tue, 18 Sep 2018 19:19:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=raNi17qMsy3qz3vPZkj768GrqOCILiI6YaVltdqquAE=; b=A9DjIL3JTEHoLzAv2OHeypKei/INHBWQzqyqz6iiG4mq49wM7Y4eNEMqPpt0gksuLl thFtSZW5NcdJAk+PmxfC1n/UsAHOMY7Ma5XtHk3lHHUWg6QxkTYRnTFLtCFMFlYONZQh 1NjMex7X8q1okNQWm+deh+2LIMKYWxmXXWyAw= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=raNi17qMsy3qz3vPZkj768GrqOCILiI6YaVltdqquAE=; b=YCQ9pAjmpJ8A01rQt8o2pt4u9t9KVN5M9eFuuN1YEMtpELUazNSxULZ6HL4zUT8gE3 dXI6dX7orN+sdQ0EjP2p9XZqLblGJaS7vwmVkeTYKM1H/eJRBN6dy3apPF/DQbw5SIMu bh+Qk+S0EFZGvIMX7xrDyxgUFR5piZ87hN+YxVgdR7i75YBgx0bOVrL8Dw0yh/C2ES4j hxmyxOkiGqadKvT4SErZmundKo2pthxTEl7XsmhKcDXDFgcoPkg6DV7JPhTAhyVFBuVx /ARyCo8F15mIXVQI5rfc5bnRi3KsPPlAGL/6ErIXRQuWcnNOcoMKFjSfZ1eXePW5Hgnw Mk6Q== X-Gm-Message-State: APzg51Cq+9kaFvVjG0wcOGdgBM/fdMZbdrDxFYr94XfyMzgsrM43RCg9 dDcwHJ2drBNehDPHl0KyOgRdQQ== X-Google-Smtp-Source: ANB0VdYkwPhSzdiZcF3NawZ/nZhJDnPNfmHeaa2Z36lULhaQfL/zFwIa7wf5TLl0TNvm2XzYmUndbg== X-Received: by 2002:a17:902:b7c5:: with SMTP id v5-v6mr32498180plz.49.1537323562272; Tue, 18 Sep 2018 19:19:22 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id e26-v6sm24230287pfi.70.2018.09.18.19.19.18 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:19:18 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Jamie Iles , linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 22/23] crypto: picoxcell - Remove VLA usage of skcipher Date: Tue, 18 Sep 2018 19:10:59 -0700 Message-Id: <20180919021100.3380-23-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. 
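To make the picoxcell change below easier to follow, here is a minimal sketch of the on-stack fallback pattern the series converts drivers to. Names are hypothetical, not the literal spacc code; the key point is that SYNC_SKCIPHER_REQUEST_ON_STACK() reserves a fixed-size buffer instead of a VLA sized at run time:

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

/*
 * Illustrative sketch, not actual driver code: run one encrypt or
 * decrypt through the sync fallback using an on-stack, fixed-size
 * request.
 */
static int example_do_fallback(struct crypto_sync_skcipher *sw_cipher,
			       struct scatterlist *src,
			       struct scatterlist *dst,
			       unsigned int nbytes, u8 *iv,
			       u32 req_flags, bool encrypt)
{
	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, sw_cipher);
	int err;

	skcipher_request_set_sync_tfm(subreq, sw_cipher);
	skcipher_request_set_callback(subreq, req_flags, NULL, NULL);
	skcipher_request_set_crypt(subreq, src, dst, nbytes, iv);

	err = encrypt ? crypto_skcipher_encrypt(subreq) :
			crypto_skcipher_decrypt(subreq);

	skcipher_request_zero(subreq);
	return err;
}
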
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Jamie Iles Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Kees Cook --- drivers/crypto/picoxcell_crypto.c | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c index 321d5e2ac833..a28f1d18fe01 100644 --- a/drivers/crypto/picoxcell_crypto.c +++ b/drivers/crypto/picoxcell_crypto.c @@ -171,7 +171,7 @@ struct spacc_ablk_ctx { * The fallback cipher. If the operation can't be done in hardware, * fallback to a software version. */ - struct crypto_skcipher *sw_cipher; + struct crypto_sync_skcipher *sw_cipher; }; /* AEAD cipher context. */ @@ -799,17 +799,17 @@ static int spacc_aes_setkey(struct crypto_ablkcipher *cipher, const u8 *key, * Set the fallback transform to use the same request flags as * the hardware transform. */ - crypto_skcipher_clear_flags(ctx->sw_cipher, + crypto_sync_skcipher_clear_flags(ctx->sw_cipher, CRYPTO_TFM_REQ_MASK); - crypto_skcipher_set_flags(ctx->sw_cipher, + crypto_sync_skcipher_set_flags(ctx->sw_cipher, cipher->base.crt_flags & CRYPTO_TFM_REQ_MASK); - err = crypto_skcipher_setkey(ctx->sw_cipher, key, len); + err = crypto_sync_skcipher_setkey(ctx->sw_cipher, key, len); tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK; tfm->crt_flags |= - crypto_skcipher_get_flags(ctx->sw_cipher) & + crypto_sync_skcipher_get_flags(ctx->sw_cipher) & CRYPTO_TFM_RES_MASK; if (err) @@ -914,7 +914,7 @@ static int spacc_ablk_do_fallback(struct ablkcipher_request *req, struct crypto_tfm *old_tfm = crypto_ablkcipher_tfm(crypto_ablkcipher_reqtfm(req)); struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm); - SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher); + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->sw_cipher); int err; /* @@ -922,7 +922,7 @@ static int spacc_ablk_do_fallback(struct ablkcipher_request *req, * the ciphering has completed, put the old transform back into the * request. 
*/ - skcipher_request_set_tfm(subreq, ctx->sw_cipher); + skcipher_request_set_sync_tfm(subreq, ctx->sw_cipher); skcipher_request_set_callback(subreq, req->base.flags, NULL, NULL); skcipher_request_set_crypt(subreq, req->src, req->dst, req->nbytes, req->info); @@ -1020,9 +1020,8 @@ static int spacc_ablk_cra_init(struct crypto_tfm *tfm) ctx->generic.flags = spacc_alg->type; ctx->generic.engine = engine; if (alg->cra_flags & CRYPTO_ALG_NEED_FALLBACK) { - ctx->sw_cipher = crypto_alloc_skcipher( - alg->cra_name, 0, CRYPTO_ALG_ASYNC | - CRYPTO_ALG_NEED_FALLBACK); + ctx->sw_cipher = crypto_alloc_sync_skcipher( + alg->cra_name, 0, CRYPTO_ALG_NEED_FALLBACK); if (IS_ERR(ctx->sw_cipher)) { dev_warn(engine->dev, "failed to allocate fallback for %s\n", alg->cra_name); @@ -1041,7 +1040,7 @@ static void spacc_ablk_cra_exit(struct crypto_tfm *tfm) { struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); - crypto_free_skcipher(ctx->sw_cipher); + crypto_free_sync_skcipher(ctx->sw_cipher); } static int spacc_ablk_encrypt(struct ablkcipher_request *req) From patchwork Wed Sep 19 02:11:00 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 10605183 X-Patchwork-Delegate: herbert@gondor.apana.org.au Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A41F415A6 for ; Wed, 19 Sep 2018 02:19:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 89B4C2BFEB for ; Wed, 19 Sep 2018 02:19:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7C2492BFFF; Wed, 19 Sep 2018 02:19:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1DEA62BFEB for ; Wed, 19 Sep 2018 02:19:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730724AbeISHyt (ORCPT ); Wed, 19 Sep 2018 03:54:49 -0400 Received: from mail-pf1-f196.google.com ([209.85.210.196]:44037 "EHLO mail-pf1-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726771AbeISHyt (ORCPT ); Wed, 19 Sep 2018 03:54:49 -0400 Received: by mail-pf1-f196.google.com with SMTP id k21-v6so1906679pff.11 for ; Tue, 18 Sep 2018 19:19:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=oy5DbuF7CYPV4PtmLt5437WSeAaoA2AwlwpYpljL60w=; b=eiWWoIVDIlBYPTbbkdI9pCPyoWrnajXdH0DDRidVjrC3Ogtikd0A70Pqo7C3CPlRsh iE2K0zF1zlgK+qwRRrrKRTBPZ6VpxlboovR51hbd+e/lSD6wO/uzUudpD0DKRwH3ERyV RcSjg42JLpLhSQI4s5Q5qsaq00hFpn7lGdc1I= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=oy5DbuF7CYPV4PtmLt5437WSeAaoA2AwlwpYpljL60w=; b=mNdfVWKwtZkmPlFKRDR8e0JmEekr2hj/PA1JWqkJc4giUKiaVH20366ZVbl15uTkSR +9mNHH3brZYfXIx4UWj/72Fem34UgRinylqKdRDtc6yUZUlZvLazOsZgMfz0SGVLaqOT 3TsDodi3KIgkOITD4Q2ItTPatw+HXRi3TzfbIBP2edEeflhJgiXp3m2H3mVe18niDY5q 
JLtG+E4U2Iu/m5nZ0W/nONRLnXHqZRXuXYt29TKwhKD0TtW2HV2pVBYzaDeUCDEdBhsQ NZrQU46VGhl1vN0fqG0t/d5HEWpxLO/o6saUMH7JJguYQQ7sLTRBD5lU63BZD7kEzOmu vLTw== X-Gm-Message-State: APzg51ANGTvHV/XsPCB42BV2tlFBMZWgcT6s3d7soTe91kEOCI/mkRtk 6YHjrRL/9P8PlI3+YMSwv3HFkg== X-Google-Smtp-Source: ANB0Vdao0+AmDrii5YU1+N32s1AluRmUTxrYnXHC9BXCRqQdzsWMn7qD7cvVvFuK2EtF5yGY20K1Qw== X-Received: by 2002:a63:c245:: with SMTP id l5-v6mr30303669pgg.255.1537323559802; Tue, 18 Sep 2018 19:19:19 -0700 (PDT) Received: from www.outflux.net (173-164-112-133-Oregon.hfc.comcastbusiness.net. [173.164.112.133]) by smtp.gmail.com with ESMTPSA id r1-v6sm39294690pfi.17.2018.09.18.19.19.18 (version=TLS1_2 cipher=ECDHE-RSA-CHACHA20-POLY1305 bits=256/256); Tue, 18 Sep 2018 19:19:18 -0700 (PDT) From: Kees Cook To: Herbert Xu Cc: Kees Cook , Ard Biesheuvel , Eric Biggers , linux-crypto , Linux Kernel Mailing List Subject: [PATCH crypto-next 23/23] crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK() Date: Tue, 18 Sep 2018 19:11:00 -0700 Message-Id: <20180919021100.3380-24-keescook@chromium.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180919021100.3380-1-keescook@chromium.org> References: <20180919021100.3380-1-keescook@chromium.org> Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Now that all the users of the VLA-generating SKCIPHER_REQUEST_ON_STACK() macro have been moved to SYNC_SKCIPHER_REQUEST_ON_STACK(), we can remove the former. Signed-off-by: Kees Cook --- include/crypto/skcipher.h | 5 ----- 1 file changed, 5 deletions(-) diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h index d00ce90dc7da..45ae894fda32 100644 --- a/include/crypto/skcipher.h +++ b/include/crypto/skcipher.h @@ -156,11 +156,6 @@ struct skcipher_alg { ] CRYPTO_MINALIGN_ATTR; \ struct skcipher_request *name = (void *)__##name##_desc -#define SKCIPHER_REQUEST_ON_STACK(name, tfm) \ - char __##name##_desc[sizeof(struct skcipher_request) + \ - crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \ - struct skcipher_request *name = (void *)__##name##_desc - /** * DOC: Symmetric Key Cipher API *
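
With SKCIPHER_REQUEST_ON_STACK() gone, the lifecycle of a software fallback in the converted drivers reduces to the sketch below (hypothetical names, not taken from any one driver). Allocation only passes CRYPTO_ALG_NEED_FALLBACK -- the conversions above consistently drop CRYPTO_ALG_ASYNC from the mask, since the sync allocator takes care of selecting a synchronous implementation -- and requests against the fallback are then built on the stack with SYNC_SKCIPHER_REQUEST_ON_STACK(), as sketched earlier in the series.

#include <crypto/skcipher.h>
#include <linux/err.h>

/* Hypothetical per-tfm context; the fallback is now a sync-only tfm. */
struct example_tfm_ctx {
	struct crypto_sync_skcipher *fallback;
};

static int example_fallback_init(struct example_tfm_ctx *ctx,
				 const char *alg_name)
{
	/* No CRYPTO_ALG_ASYNC needed in the mask for a sync-only tfm. */
	ctx->fallback = crypto_alloc_sync_skcipher(alg_name, 0,
						   CRYPTO_ALG_NEED_FALLBACK);
	if (IS_ERR(ctx->fallback))
		return PTR_ERR(ctx->fallback);

	return 0;
}

static void example_fallback_exit(struct example_tfm_ctx *ctx)
{
	crypto_free_sync_skcipher(ctx->fallback);
}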