From patchwork Mon Aug 20 14:58:34 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10570479
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, vakul.garg@nxp.com, davejwatson@fb.com,
	peter.doliwa@nxp.com, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH] crypto: arm64/aes-gcm-ce - fix scatterwalk API violation
Date: Mon, 20 Aug 2018 16:58:34 +0200
Message-Id: <20180820145834.5916-1-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0

Commit 71e52c278c54 ("crypto: arm64/aes-ce-gcm - operate on two input
blocks at a time") modified the granularity at which the AES/GCM code
processes its input, so that subsequent changes could improve
performance by aggregating multiple input blocks into a single round
of processing. To that end, it doubled the algorithm's 'chunksize'
property to 2 x AES_BLOCK_SIZE, but retained the non-SIMD fallback
path that processes a single block at a time.

In some cases, this violates the skcipher scatterwalk API, by calling
skcipher_walk_done() with a non-zero residue value for a chunk that is
expected to be handled in its entirety. This results in a WARN_ON()
being hit by the TLS self test code, and is likely to break other use
cases as well. Unfortunately, none of the current test cases exercises
this exact code path.

Fixes: 71e52c278c54 ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time")
Reported-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Vakul Garg <vakul.garg@nxp.com>
---
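To make the contract concrete, here is a rough userspace model of the
rule the old fallback violated. Everything below -- the walker state
and the walk_next()/walk_done() helpers -- is invented for this
sketch; the real skcipher_walk API behaves analogously but its
bookkeeping is more involved:

#include <assert.h>
#include <stdio.h>

#define AES_BLOCK_SIZE	16
#define CHUNKSIZE	(2 * AES_BLOCK_SIZE)	/* algo's declared chunksize */

static unsigned int total;	/* bytes left in the request             */
static unsigned int nbytes;	/* bytes handed out for the current step */

/* Hand out the next stretch: whole chunks while possible, then the tail. */
static void walk_next(void)
{
	nbytes = (total > CHUNKSIZE) ? total - total % CHUNKSIZE : total;
}

/*
 * Stand-in for skcipher_walk_done(): 'residue' is what the caller left
 * unprocessed.  A caller may only stop short on a chunksize boundary,
 * so in particular the final sub-chunksize stretch must be consumed
 * whole.  The assert models the WARN_ON() the TLS selftest tripped.
 */
static void walk_done(unsigned int residue)
{
	assert(residue == 0 || (nbytes - residue) % CHUNKSIZE == 0);
	total -= nbytes - residue;
}

int main(void)
{
	/* Fixed pattern: consume whole chunks, then the tail in one go. */
	total = 52;		/* 3 AES blocks plus 4 trailing bytes */
	walk_next();
	while (nbytes >= CHUNKSIZE) {
		walk_done(nbytes % CHUNKSIZE);
		walk_next();
	}
	if (nbytes)		/* tail of up to CHUNKSIZE - 1 bytes */
		walk_done(0);
	printf("fixed pattern: ok\n");

	/* Old pattern: block-at-a-time loop with block-sized residues. */
	total = 52;
	walk_next();
	while (nbytes >= AES_BLOCK_SIZE) {
		walk_done(nbytes % AES_BLOCK_SIZE);	/* aborts on the tail */
		walk_next();
	}
	return 0;
}

The fixed pattern runs to completion, while the old pattern hits the
assert on the final 20-byte stretch: it consumes 16 bytes and hands
back a 4-byte residue that does not sit on a chunk boundary, which is
exactly the skcipher_walk_done() misuse described above.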
 arch/arm64/crypto/ghash-ce-glue.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 6e9f33d14930..067d8937d5af 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -417,7 +417,7 @@ static int gcm_encrypt(struct aead_request *req)
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
+		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;
 			u8 *src = walk.src.virt.addr;
@@ -437,11 +437,18 @@ static int gcm_encrypt(struct aead_request *req)
 					    NULL);
 
 			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
-		if (walk.nbytes)
+		if (walk.nbytes) {
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, ks, iv,
 					    nrounds);
+			if (walk.nbytes > AES_BLOCK_SIZE) {
+				crypto_inc(iv, AES_BLOCK_SIZE);
+				__aes_arm64_encrypt(ctx->aes_key.key_enc,
+						    ks + AES_BLOCK_SIZE, iv,
+						    nrounds);
+			}
+		}
 	}
 
 	/* handle the tail */
@@ -545,7 +552,7 @@ static int gcm_decrypt(struct aead_request *req)
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 
-		while (walk.nbytes >= AES_BLOCK_SIZE) {
+		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;
 			u8 *src = walk.src.virt.addr;
@@ -564,11 +571,21 @@ static int gcm_decrypt(struct aead_request *req)
 			} while (--blocks > 0);
 
 			err = skcipher_walk_done(&walk,
-						 walk.nbytes % AES_BLOCK_SIZE);
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
-		if (walk.nbytes)
+		if (walk.nbytes) {
+			if (walk.nbytes > AES_BLOCK_SIZE) {
+				u8 *iv2 = iv + AES_BLOCK_SIZE;
+
+				memcpy(iv2, iv, AES_BLOCK_SIZE);
+				crypto_inc(iv2, AES_BLOCK_SIZE);
+
+				__aes_arm64_encrypt(ctx->aes_key.key_enc, iv2,
+						    iv2, nrounds);
+			}
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, iv, iv,
 					    nrounds);
+		}
 	}
 
 	/* handle the tail */
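A note on the new tail handling shared by both hunks: with the 2-block
chunksize, the tail left over after the main loop can now be up to
(2 * AES_BLOCK_SIZE) - 1 bytes long, so up to two blocks of CTR
keystream must be derived for it. The standalone sketch below shows
that derivation; blockcipher_encrypt() is a dummy stand-in for
__aes_arm64_encrypt() (not real AES), and ctr_inc() mimics the
big-endian counter increment that crypto_inc() performs in the kernel:

#include <stdint.h>
#include <stdio.h>

#define AES_BLOCK_SIZE	16

/* Dummy stand-in for __aes_arm64_encrypt(): NOT real AES. */
static void blockcipher_encrypt(uint8_t dst[AES_BLOCK_SIZE],
				const uint8_t src[AES_BLOCK_SIZE])
{
	for (int i = 0; i < AES_BLOCK_SIZE; i++)
		dst[i] = src[i] ^ 0xAA;
}

/* Big-endian counter increment, as crypto_inc() does in the kernel. */
static void ctr_inc(uint8_t ctr[AES_BLOCK_SIZE])
{
	for (int i = AES_BLOCK_SIZE - 1; i >= 0; i--)
		if (++ctr[i])
			break;
}

/* XOR a tail of 1..31 bytes with one or two blocks of CTR keystream. */
static void crypt_tail(uint8_t *dst, const uint8_t *src,
		       unsigned int len, uint8_t iv[AES_BLOCK_SIZE])
{
	uint8_t ks[2 * AES_BLOCK_SIZE];

	blockcipher_encrypt(ks, iv);		/* first keystream block */
	if (len > AES_BLOCK_SIZE) {		/* tail spills into a second */
		ctr_inc(iv);
		blockcipher_encrypt(ks + AES_BLOCK_SIZE, iv);
	}
	for (unsigned int i = 0; i < len; i++)
		dst[i] = src[i] ^ ks[i];
}

int main(void)
{
	uint8_t iv[AES_BLOCK_SIZE] = { 0 };
	uint8_t in[20] = "a 20 byte tail....", out[20];

	crypt_tail(out, in, sizeof(in), iv);	/* needs two keystream blocks */
	printf("encrypted %zu tail bytes\n", sizeof(in));
	return 0;
}

The decrypt hunk stages the second counter in a separate iv2 buffer
(memcpy plus crypto_inc) before encrypting, because it converts both
counter blocks into keystream in place.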