From patchwork Wed Apr 10 06:46:29 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893261
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: stable@vger.kernel.org, Ondrej Mosnacek
Subject: [PATCH v2 1/7] crypto: lrw - don't access already-freed walk.iv
Date: Tue, 9 Apr 2019 23:46:29 -0700
Message-Id: <20190410064635.11813-2-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

If the user-provided IV needs to be aligned to the algorithm's alignmask,
then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv.
But skcipher_walk_virt() can fail afterwards, and then if the caller
unconditionally accesses walk.iv, it's a use-after-free.

Fix this in the LRW template by checking the return value of
skcipher_walk_virt().

This bug was detected by my patches that improve testmgr to fuzz algorithms
against their generic implementation.  When the extra self-tests were run on
a KASAN-enabled kernel, a KASAN use-after-free splat occurred during
lrw(aes) testing.
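For reference, the safe calling pattern for any skcipher walk user looks
roughly like this (a minimal sketch around a made-up example_crypt() helper;
it is not the actual lrw.c code, just the shape of the fix):

    /* Sketch: walk.iv must only be read after skcipher_walk_virt() succeeds,
     * because on failure the buffer it points to may already be freed.
     */
    static int example_crypt(struct skcipher_request *req)
    {
            struct skcipher_walk walk;
            int err;

            err = skcipher_walk_virt(&walk, req, false);
            if (err)
                    return err;     /* do not touch walk.iv on error */

            /* walk.iv is valid only from this point on */
            while (walk.nbytes) {
                    /* process walk.src.virt.addr -> walk.dst.virt.addr */
                    err = skcipher_walk_done(&walk, 0);
            }
            return err;
    }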
Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Ondrej Mosnacek
Signed-off-by: Eric Biggers
---
 crypto/lrw.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/crypto/lrw.c b/crypto/lrw.c
index 0430ccd087286..b6666c595a686 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_request *req, bool second_pass)
 	}
 
 	err = skcipher_walk_virt(&w, req, false);
-	iv = (__be32 *)w.iv;
+	if (err)
+		return err;
 
+	iv = (__be32 *)w.iv;
 	counter[0] = be32_to_cpu(iv[3]);
 	counter[1] = be32_to_cpu(iv[2]);
 	counter[2] = be32_to_cpu(iv[1]);

From patchwork Wed Apr 10 06:46:30 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893273
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH v2 2/7] crypto: salsa20 - don't access already-freed walk.iv
Date: Tue, 9 Apr 2019 23:46:30 -0700
Message-Id: <20190410064635.11813-3-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

If the user-provided IV needs to be aligned to the algorithm's alignmask,
then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv.
But skcipher_walk_virt() can fail afterwards, and then if the caller
unconditionally accesses walk.iv, it's a use-after-free.

salsa20-generic doesn't set an alignmask, so currently it isn't affected by
this despite unconditionally accessing walk.iv.  However this is more subtle
than desired, and it was actually broken prior to the alignmask being removed
by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup and convert to
skcipher API").

Since salsa20-generic does not update the IV and does not need any IV
alignment, update it to use req->iv instead of walk.iv.

Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers
---
 crypto/salsa20_generic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c
index 443fba09cbed7..faed244be316f 100644
--- a/crypto/salsa20_generic.c
+++ b/crypto/salsa20_generic.c
@@ -160,7 +160,7 @@ static int salsa20_crypt(struct skcipher_request *req)
 
 	err = skcipher_walk_virt(&walk, req, false);
 
-	salsa20_init(state, ctx, walk.iv);
+	salsa20_init(state, ctx, req->iv);
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;

From patchwork Wed Apr 10 06:46:31 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893271
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH v2 3/7] crypto: arm/aes-neonbs - don't access already-freed walk.iv
Date: Tue, 9 Apr 2019 23:46:31 -0700
Message-Id: <20190410064635.11813-4-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

If the user-provided IV needs to be aligned to the algorithm's alignmask,
then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv.
But skcipher_walk_virt() can fail afterwards, and then if the caller
unconditionally accesses walk.iv, it's a use-after-free.

arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv.  However this is more
subtle than desired, and it was actually broken prior to the alignmask being
removed by commit cc477bf64573 ("crypto: arm/aes - replace bit-sliced OpenSSL
NEON code").  Thus, update xts-aes-neonbs to start checking the return value
of skcipher_walk_virt().

Fixes: e4e7f10bfc40 ("ARM: add support for bit sliced AES using NEON instructions")
Cc: <stable@vger.kernel.org> # v3.13+
Signed-off-by: Eric Biggers
---
 arch/arm/crypto/aes-neonbs-glue.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
index 07e31941dc674..617c2c99ebfb3 100644
--- a/arch/arm/crypto/aes-neonbs-glue.c
+++ b/arch/arm/crypto/aes-neonbs-glue.c
@@ -278,6 +278,8 @@ static int __xts_crypt(struct skcipher_request *req,
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
 
 	crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);

From patchwork Wed Apr 10 06:46:32 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893275
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH v2 4/7] crypto: arm64/aes-neonbs - don't access already-freed walk.iv
Date: Tue, 9 Apr 2019 23:46:32 -0700
Message-Id: <20190410064635.11813-5-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

If the user-provided IV needs to be aligned to the algorithm's alignmask,
then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv.
But skcipher_walk_virt() can fail afterwards, and then if the caller
unconditionally accesses walk.iv, it's a use-after-free.

xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected by
this despite unconditionally accessing walk.iv.  However this is more subtle
than desired, and unconditionally accessing walk.iv has caused a real problem
in other algorithms.  Thus, update xts-aes-neonbs to start checking the
return value of skcipher_walk_virt().

Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
Cc: <stable@vger.kernel.org> # v4.11+
Signed-off-by: Eric Biggers
---
 arch/arm64/crypto/aes-neonbs-glue.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index 4737b6c6c5cf5..5144551177334 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -304,6 +304,8 @@ static int __xts_crypt(struct skcipher_request *req,
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, false);
+	if (err)
+		return err;
 
 	kernel_neon_begin();
 	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);

From patchwork Wed Apr 10 06:46:33 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893267
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH v2 5/7] crypto: gcm - fix incompatibility between "gcm" and "gcm_base"
Date: Tue, 9 Apr 2019 23:46:33 -0700
Message-Id: <20190410064635.11813-6-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

GCM instances can be created by either the "gcm" template, which only allows
choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows
choosing the ctr and ghash implementations, e.g.
"gcm_base(ctr(aes-generic),ghash-generic)".

However, a "gcm_base" instance prevents a "gcm" instance from being
registered using the same implementations.  Nor will the instance be found
by lookups of "gcm".  This can be used as a denial of service.  Moreover,
"gcm_base" instances are never tested by the crypto self-tests, even if
there are compatible "gcm" tests.

The root cause of these problems is that instances of the two templates use
different cra_names.  Therefore, fix these problems by making "gcm_base"
instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead
of "gcm_base(ctr(aes-generic),ghash-generic)".

This requires extracting the block cipher name from the name of the ctr
algorithm.  It also requires starting to verify that the algorithms are
really ctr and ghash, not something else entirely.  But it would be bizarre
if anyone were actually using non-gcm-compatible algorithms with gcm_base,
so this shouldn't break anyone in practice.
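To illustrate the renaming, here is a small standalone sketch (plain
userspace C with a hypothetical derive_gcm_name() helper, not the kernel
code) of how the "gcm(...)" cra_name falls out of the ctr algorithm's
cra_name: "ctr(aes)" has the "ctr(" prefix skipped and its trailing ")"
reused, yielding "gcm(aes)".

    #include <stdio.h>
    #include <string.h>

    #define MAX_ALG_NAME 128

    static int derive_gcm_name(char *out, const char *ctr_cra_name)
    {
            /* mirror the new check: the spawn really has to be ctr(...) */
            if (strncmp(ctr_cra_name, "ctr(", 4) != 0)
                    return -1;
            if (snprintf(out, MAX_ALG_NAME, "gcm(%s", ctr_cra_name + 4) >= MAX_ALG_NAME)
                    return -1;
            return 0;
    }

    int main(void)
    {
            char name[MAX_ALG_NAME];

            if (derive_gcm_name(name, "ctr(aes)") == 0)
                    printf("%s\n", name);   /* prints: gcm(aes) */
            return 0;
    }

The driver name keeps the explicit "gcm_base(...)" spelling; only the lookup
name (cra_name) changes.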
Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers
---
 crypto/gcm.c | 32 +++++++++-----------------------
 1 file changed, 9 insertions(+), 23 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index e1a11f529d257..c7354ed1fa4d4 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -597,7 +597,6 @@ static void crypto_gcm_free(struct aead_instance *inst)
 
 static int crypto_gcm_create_common(struct crypto_template *tmpl,
 				    struct rtattr **tb,
-				    const char *full_name,
 				    const char *ctr_name,
 				    const char *ghash_name)
 {
@@ -638,7 +637,7 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
 		goto err_free_inst;
 
 	err = -EINVAL;
-	if (ghash->digestsize != 16)
+	if (strcmp(ghash->base.cra_name, "ghash") != 0)
 		goto err_drop_ghash;
 
 	crypto_set_skcipher_spawn(&ctx->ctr, aead_crypto_instance(inst));
@@ -650,24 +649,23 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
 
 	ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
 
-	/* We only support 16-byte blocks. */
+	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
 	err = -EINVAL;
-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
+	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
+	    crypto_skcipher_alg_ivsize(ctr) != 16)
 		goto out_put_ctr;
 
-	/* Not a stream cipher? */
-	if (ctr->base.cra_blocksize != 1)
+	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+		     "gcm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
 		goto out_put_ctr;
 
-	err = -ENAMETOOLONG;
 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
 		     "gcm_base(%s,%s)", ctr->base.cra_driver_name,
 		     ghash_alg->cra_driver_name) >=
 	    CRYPTO_MAX_ALG_NAME)
 		goto out_put_ctr;
 
-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
-
 	inst->alg.base.cra_flags = (ghash->base.cra_flags |
 				    ctr->base.cra_flags) & CRYPTO_ALG_ASYNC;
 	inst->alg.base.cra_priority = (ghash->base.cra_priority +
@@ -709,7 +707,6 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
 	const char *cipher_name;
 	char ctr_name[CRYPTO_MAX_ALG_NAME];
-	char full_name[CRYPTO_MAX_ALG_NAME];
 
 	cipher_name = crypto_attr_alg_name(tb[1]);
 	if (IS_ERR(cipher_name))
@@ -719,12 +716,7 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
 	    CRYPTO_MAX_ALG_NAME)
 		return -ENAMETOOLONG;
 
-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm(%s)", cipher_name) >=
-	    CRYPTO_MAX_ALG_NAME)
-		return -ENAMETOOLONG;
-
-	return crypto_gcm_create_common(tmpl, tb, full_name,
-					ctr_name, "ghash");
+	return crypto_gcm_create_common(tmpl, tb, ctr_name, "ghash");
 }
 
 static int crypto_gcm_base_create(struct crypto_template *tmpl,
@@ -732,7 +724,6 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
 {
 	const char *ctr_name;
 	const char *ghash_name;
-	char full_name[CRYPTO_MAX_ALG_NAME];
 
 	ctr_name = crypto_attr_alg_name(tb[1]);
 	if (IS_ERR(ctr_name))
@@ -742,12 +733,7 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
 	if (IS_ERR(ghash_name))
 		return PTR_ERR(ghash_name);
 
-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm_base(%s,%s)",
-		     ctr_name, ghash_name) >= CRYPTO_MAX_ALG_NAME)
-		return -ENAMETOOLONG;
-
-	return crypto_gcm_create_common(tmpl, tb, full_name,
-					ctr_name, ghash_name);
+	return crypto_gcm_create_common(tmpl, tb, ctr_name, ghash_name);
 }
 
 static int crypto_rfc4106_setkey(struct crypto_aead *parent, const u8 *key,

From patchwork Wed Apr 10 06:46:34 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893265
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: stable@vger.kernel.org
Subject: [PATCH v2 6/7] crypto: ccm - fix incompatibility between "ccm" and "ccm_base"
Date: Tue, 9 Apr 2019 23:46:34 -0700
Message-Id: <20190410064635.11813-7-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

CCM instances can be created by either the "ccm" template, which only allows
choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows
choosing the ctr and cbcmac implementations, e.g.
"ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

However, a "ccm_base" instance prevents a "ccm" instance from being
registered using the same implementations.  Nor will the instance be found
by lookups of "ccm".  This can be used as a denial of service.  Moreover,
"ccm_base" instances are never tested by the crypto self-tests, even if
there are compatible "ccm" tests.

The root cause of these problems is that instances of the two templates use
different cra_names.  Therefore, fix these problems by making "ccm_base"
instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead
of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".
This requires extracting the block cipher name from the name of the ctr and
cbcmac algorithms.  It also requires starting to verify that the algorithms
are really ctr and cbcmac using the same block cipher, not something else
entirely.  But it would be bizarre if anyone were actually using
non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone
in practice.

Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers
---
 crypto/ccm.c | 43 +++++++++++++++++--------------------------
 1 file changed, 17 insertions(+), 26 deletions(-)

diff --git a/crypto/ccm.c b/crypto/ccm.c
index 50df8f001c1c9..dbb3b6d8e6136 100644
--- a/crypto/ccm.c
+++ b/crypto/ccm.c
@@ -458,7 +458,6 @@ static void crypto_ccm_free(struct aead_instance *inst)
 
 static int crypto_ccm_create_common(struct crypto_template *tmpl,
 				    struct rtattr **tb,
-				    const char *full_name,
 				    const char *ctr_name,
 				    const char *mac_name)
 {
@@ -486,7 +485,8 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
 
 	mac = __crypto_hash_alg_common(mac_alg);
 	err = -EINVAL;
-	if (mac->digestsize != 16)
+	if (strncmp(mac->base.cra_name, "cbcmac(", 7) != 0 ||
+	    mac->digestsize != 16)
 		goto out_put_mac;
 
 	inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
@@ -509,23 +509,26 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
 
 	ctr = crypto_spawn_skcipher_alg(&ictx->ctr);
 
-	/* Not a stream cipher? */
+	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
 	err = -EINVAL;
-	if (ctr->base.cra_blocksize != 1)
+	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
+	    crypto_skcipher_alg_ivsize(ctr) != 16)
 		goto err_drop_ctr;
 
-	/* We want the real thing! */
-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
+	/* ctr and cbcmac must use the same underlying block cipher. */
+	if (strcmp(ctr->base.cra_name + 4, mac->base.cra_name + 7) != 0)
 		goto err_drop_ctr;
 
 	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+		     "ccm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
+		goto err_drop_ctr;
+
 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
 		     "ccm_base(%s,%s)", ctr->base.cra_driver_name,
 		     mac->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
 		goto err_drop_ctr;
 
-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
-
 	inst->alg.base.cra_flags = ctr->base.cra_flags & CRYPTO_ALG_ASYNC;
 	inst->alg.base.cra_priority = (mac->base.cra_priority +
 				       ctr->base.cra_priority) / 2;
@@ -567,7 +570,6 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
 	const char *cipher_name;
 	char ctr_name[CRYPTO_MAX_ALG_NAME];
 	char mac_name[CRYPTO_MAX_ALG_NAME];
-	char full_name[CRYPTO_MAX_ALG_NAME];
 
 	cipher_name = crypto_attr_alg_name(tb[1]);
 	if (IS_ERR(cipher_name))
@@ -581,35 +583,24 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
 		     cipher_name) >= CRYPTO_MAX_ALG_NAME)
 		return -ENAMETOOLONG;
 
-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm(%s)", cipher_name) >=
-	    CRYPTO_MAX_ALG_NAME)
-		return -ENAMETOOLONG;
-
-	return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
-					mac_name);
+	return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
 }
 
 static int crypto_ccm_base_create(struct crypto_template *tmpl,
 				  struct rtattr **tb)
 {
 	const char *ctr_name;
-	const char *cipher_name;
-	char full_name[CRYPTO_MAX_ALG_NAME];
+	const char *mac_name;
 
 	ctr_name = crypto_attr_alg_name(tb[1]);
 	if (IS_ERR(ctr_name))
 		return PTR_ERR(ctr_name);
 
-	cipher_name = crypto_attr_alg_name(tb[2]);
-	if (IS_ERR(cipher_name))
-		return PTR_ERR(cipher_name);
-
-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)",
-		     ctr_name, cipher_name) >= CRYPTO_MAX_ALG_NAME)
-		return -ENAMETOOLONG;
+	mac_name = crypto_attr_alg_name(tb[2]);
+	if (IS_ERR(mac_name))
+		return PTR_ERR(mac_name);
 
-	return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
-					cipher_name);
+	return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
 }
 
 static int crypto_rfc4309_setkey(struct crypto_aead *parent, const u8 *key,
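The consistency check added above can be illustrated with a small standalone
sketch (userspace C with a hypothetical same_block_cipher() helper, not the
kernel code): "ctr(aes)" + 4 and "cbcmac(aes)" + 7 both point at "aes)", so a
strcmp of the two tails confirms that the ctr and cbcmac algorithms wrap the
same block cipher.

    #include <stdio.h>
    #include <string.h>

    /* 1 if both names are well-formed and wrap the same block cipher, else 0 */
    static int same_block_cipher(const char *ctr_cra_name, const char *mac_cra_name)
    {
            if (strncmp(ctr_cra_name, "ctr(", 4) != 0 ||
                strncmp(mac_cra_name, "cbcmac(", 7) != 0)
                    return 0;
            return strcmp(ctr_cra_name + 4, mac_cra_name + 7) == 0;
    }

    int main(void)
    {
            printf("%d\n", same_block_cipher("ctr(aes)", "cbcmac(aes)"));     /* 1 */
            printf("%d\n", same_block_cipher("ctr(aes)", "cbcmac(serpent)")); /* 0 */
            return 0;
    }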
From patchwork Wed Apr 10 06:46:35 2019
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 10893269
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: Daniel Axtens
Subject: [PATCH v2 7/7] crypto: vmx - return correct error code on failed setkey
Date: Tue, 9 Apr 2019 23:46:35 -0700
Message-Id: <20190410064635.11813-8-ebiggers@kernel.org>
In-Reply-To: <20190410064635.11813-1-ebiggers@kernel.org>
References: <20190410064635.11813-1-ebiggers@kernel.org>

In the VMX implementations of AES and AES modes, return -EINVAL when an
invalid key length is provided, rather than some unusual error code
determined via a series of additions.  This makes the behavior match the
other AES implementations in the kernel's crypto API.

Cc: Daniel Axtens
Signed-off-by: Eric Biggers
---
 drivers/crypto/vmx/aes.c     | 7 ++++---
 drivers/crypto/vmx/aes_cbc.c | 7 ++++---
 drivers/crypto/vmx/aes_ctr.c | 5 +++--
 drivers/crypto/vmx/aes_xts.c | 9 +++++----
 4 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/vmx/aes.c b/drivers/crypto/vmx/aes.c
index d7316f7a3a696..b00d6947e02f4 100644
--- a/drivers/crypto/vmx/aes.c
+++ b/drivers/crypto/vmx/aes.c
@@ -78,13 +78,14 @@ static int p8_aes_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_disable();
 	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
-	ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
+	ret |= aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
 	disable_kernel_vsx();
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_cipher_setkey(ctx->fallback, key, keylen);
-	return ret;
+	ret |= crypto_cipher_setkey(ctx->fallback, key, keylen);
+
+	return ret ? -EINVAL : 0;
 }
 
 static void p8_aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
index c5c5ff82b52e0..fbe882ef1bc5d 100644
--- a/drivers/crypto/vmx/aes_cbc.c
+++ b/drivers/crypto/vmx/aes_cbc.c
@@ -81,13 +81,14 @@ static int p8_aes_cbc_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_disable();
 	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
-	ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
+	ret |= aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
 	disable_kernel_vsx();
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
-	return ret;
+	ret |= crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
+
+	return ret ? -EINVAL : 0;
 }
 
 static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c
index 8a2fe092cb8e0..214c69db9ebdf 100644
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -83,8 +83,9 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
-	return ret;
+	ret |= crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
+
+	return ret ? -EINVAL : 0;
 }
 
 static void p8_aes_ctr_final(struct p8_aes_ctr_ctx *ctx,
diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
index ecd64e5cc5bbb..5bf4c38566502 100644
--- a/drivers/crypto/vmx/aes_xts.c
+++ b/drivers/crypto/vmx/aes_xts.c
@@ -86,14 +86,15 @@ static int p8_aes_xts_setkey(struct crypto_tfm *tfm, const u8 *key,
 	pagefault_disable();
 	enable_kernel_vsx();
 	ret = aes_p8_set_encrypt_key(key + keylen/2, (keylen/2) * 8, &ctx->tweak_key);
-	ret += aes_p8_set_encrypt_key(key, (keylen/2) * 8, &ctx->enc_key);
-	ret += aes_p8_set_decrypt_key(key, (keylen/2) * 8, &ctx->dec_key);
+	ret |= aes_p8_set_encrypt_key(key, (keylen/2) * 8, &ctx->enc_key);
+	ret |= aes_p8_set_decrypt_key(key, (keylen/2) * 8, &ctx->dec_key);
 	disable_kernel_vsx();
 	pagefault_enable();
 	preempt_enable();
 
-	ret += crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
-	return ret;
+	ret |= crypto_sync_skcipher_setkey(ctx->fallback, key, keylen);
+
+	return ret ? -EINVAL : 0;
 }
 
 static int p8_aes_xts_crypt(struct blkcipher_desc *desc,
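To make the intent of the setkey changes concrete, here is a trivial
standalone sketch (userspace C with stubbed-out key-expansion helpers, not
the vmx driver itself): any nonzero result from the sub-operations is
collapsed with |= and reported as -EINVAL, instead of summing raw return
codes into an arbitrary value.

    #include <errno.h>
    #include <stdio.h>

    /* stand-ins for aes_p8_set_encrypt_key()/aes_p8_set_decrypt_key() */
    static int set_enc_key(int keylen) { return (keylen == 16 || keylen == 24 || keylen == 32) ? 0 : -1; }
    static int set_dec_key(int keylen) { return (keylen == 16 || keylen == 24 || keylen == 32) ? 0 : -1; }

    static int example_setkey(int keylen)
    {
            int ret;

            ret  = set_enc_key(keylen);
            ret |= set_dec_key(keylen);

            return ret ? -EINVAL : 0;
    }

    int main(void)
    {
            printf("%d\n", example_setkey(16)); /* 0 */
            printf("%d\n", example_setkey(17)); /* -EINVAL (-22 on Linux) */
            return 0;
    }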