From patchwork Wed Feb 1 12:31:33 2023
X-Patchwork-Submitter: "tianjia.zhang" <tianjia.zhang@linux.alibaba.com>
X-Patchwork-Id: 13124209
From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
To: Herbert Xu, "David S. Miller", Catalin Marinas, Will Deacon,
    linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: Tianjia Zhang
Subject: [PATCH v3] crypto: arm64/sm4-gcm - Fix possible crash in GCM cryption
Date: Wed, 1 Feb 2023 20:31:33 +0800
Message-Id: <20230201123133.99768-1-tianjia.zhang@linux.alibaba.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)

When the total cryption length is zero, the call to skcipher_walk_done()
in GCM cryption causes an unexpected crash, so skip calling this function
when the GCM cryption length is equal to zero.

This patch also rewrites the skcipher walker loop and separates the
cryption of the last chunk from the walker loop.
In addition to following the usual convention of checking walk->nbytes,
it also makes the execution logic of the loop clearer and easier to
understand.

Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/arm64/crypto/sm4-ce-gcm-glue.c | 43 ++++++++++++++---------------
 1 file changed, 20 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
index c450a2025ca9..80ac4e94a90d 100644
--- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
+++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
@@ -143,7 +143,7 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
 {
 	u8 __aligned(8) iv[SM4_BLOCK_SIZE];
 	be128 __aligned(8) lengths;
-	int err;
+	int err = 0;
 
 	memset(ghash, 0, SM4_BLOCK_SIZE);
 
@@ -158,34 +158,31 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
 	if (req->assoclen)
 		gcm_calculate_auth_mac(req, ghash);
 
-	do {
+	while (walk->nbytes && walk->nbytes != walk->total) {
 		unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
-		const u8 *src = walk->src.virt.addr;
-		u8 *dst = walk->dst.virt.addr;
-
-		if (walk->nbytes == walk->total) {
-			tail = 0;
-
-			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
-					       walk->nbytes, ghash,
-					       ctx->ghash_table,
-					       (const u8 *)&lengths);
-		} else if (walk->nbytes - tail) {
-			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
-					       walk->nbytes - tail, ghash,
-					       ctx->ghash_table, NULL);
-		}
+
+		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, walk->dst.virt.addr,
+				       walk->src.virt.addr, iv,
+				       walk->nbytes - tail, ghash,
+				       ctx->ghash_table, NULL);
 
 		kernel_neon_end();
 
 		err = skcipher_walk_done(walk, tail);
-		if (err)
-			return err;
-		if (walk->nbytes)
-			kernel_neon_begin();
-	} while (walk->nbytes > 0);
-
-	return 0;
+
+		kernel_neon_begin();
+	}
+
+	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, walk->dst.virt.addr,
+			       walk->src.virt.addr, iv, walk->nbytes, ghash,
+			       ctx->ghash_table, (const u8 *)&lengths);
+
+	kernel_neon_end();
+
+	if (walk->nbytes)
+		err = skcipher_walk_done(walk, 0);
+
+	return err;
 }
 
 static int gcm_encrypt(struct aead_request *req)
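
A quick way to exercise the zero-length path this patch is about is to
issue a gcm(sm4) AEAD request whose cryptlen is zero, with associated
data only. The following is a minimal, untested sketch of a throwaway
test module, not part of this patch: it only assumes the standard
in-kernel AEAD API, and the module/function names are made up for
illustration. Whether the pre-fix crash actually reproduces depends on
the implementation from sm4-ce-gcm-glue.c being the one selected for
"gcm(sm4)" on the running system.

/*
 * Hypothetical throwaway test module (not part of this patch): performs a
 * gcm(sm4) encryption with cryptlen == 0 and associated data only, which
 * is the condition under which the old loop called skcipher_walk_done()
 * on a walk that holds no data.
 */
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <crypto/aead.h>

static int __init sm4_gcm_zero_len_test_init(void)
{
	static u8 key[16];		/* all-zero 128-bit SM4 key */
	static u8 iv[12];		/* all-zero 96-bit GCM nonce */
	static u8 assoc[16];		/* associated data only, no plaintext */
	static u8 out[16 + 16];		/* assoc copy + 16-byte tag */
	struct crypto_aead *tfm;
	struct aead_request *req;
	struct scatterlist src, dst;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_aead("gcm(sm4)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, sizeof(key));
	if (!err)
		err = crypto_aead_setauthsize(tfm, 16);
	if (err)
		goto out_free_tfm;

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	/* src carries only the associated data; dst also needs tag space. */
	sg_init_one(&src, assoc, sizeof(assoc));
	sg_init_one(&dst, out, sizeof(out));

	aead_request_set_callback(req, 0, crypto_req_done, &wait);
	aead_request_set_ad(req, sizeof(assoc));
	aead_request_set_crypt(req, &src, &dst, 0, iv);

	/* cryptlen == 0: the case the rewritten loop now handles safely. */
	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);
	pr_info("gcm(sm4) zero-length encrypt returned %d\n", err);

	aead_request_free(req);
out_free_tfm:
	crypto_free_aead(tfm);
	return err;
}

static void __exit sm4_gcm_zero_len_test_exit(void)
{
}

module_init(sm4_gcm_zero_len_test_init);
module_exit(sm4_gcm_zero_len_test_exit);
MODULE_LICENSE("GPL");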