From patchwork Wed Jun 20 19:04:06 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10478549
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Kees Cook
To: Herbert Xu
Cc: Kees Cook, "Gustavo A. R. Silva", Alasdair Kergon, Arnd Bergmann,
    Eric Biggers, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
    Rabin Vincent, Tim Chen, "David S. Miller",
    linux-crypto@vger.kernel.org, qat-linux@intel.com,
    dm-devel@redhat.com, linux-kernel@vger.kernel.org
Subject: [PATCH 09/11] crypto: shash: Remove VLA usage in unaligned hashing
Date: Wed, 20 Jun 2018 12:04:06 -0700
Message-Id: <20180620190408.45104-10-keescook@chromium.org>
In-Reply-To: <20180620190408.45104-1-keescook@chromium.org>
References: <20180620190408.45104-1-keescook@chromium.org>
X-Mailer: git-send-email 2.17.1

In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Kees Cook
---
 crypto/shash.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index ab6902c6dae7..1bb58209330a 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 
-static inline unsigned int shash_align_buffer_size(unsigned len,
-                                                   unsigned long mask)
-{
-        typedef u8 __aligned_largest u8_aligned;
-        return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
 static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
                                   unsigned int len)
 {
@@ -88,11 +81,14 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
         unsigned long alignmask = crypto_shash_alignmask(tfm);
         unsigned int unaligned_len = alignmask + 1 -
                                      ((unsigned long)data & alignmask);
-        u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
-                __aligned_largest;
+        u8 ubuf[CRYPTO_ALG_MAX_ALIGNMASK]
+                __aligned(CRYPTO_ALG_MAX_ALIGNMASK + 1);
         u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
         int err;
 
+        if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+                return -EINVAL;
+
         if (unaligned_len > len)
                 unaligned_len = len;
 
@@ -124,11 +120,14 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
         unsigned long alignmask = crypto_shash_alignmask(tfm);
         struct shash_alg *shash = crypto_shash_alg(tfm);
         unsigned int ds = crypto_shash_digestsize(tfm);
-        u8 ubuf[shash_align_buffer_size(ds, alignmask)]
-                __aligned_largest;
+        u8 ubuf[SHASH_MAX_DIGESTSIZE]
+                __aligned(CRYPTO_ALG_MAX_ALIGNMASK + 1);
         u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
         int err;
 
+        if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+                return -EINVAL;
+
         err = shash->final(desc, buf);
         if (err)
                 goto out;