From patchwork Sat Aug 8 06:47:28 2015
X-Patchwork-Submitter: Jan Stancek
X-Patchwork-Id: 6974001
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Jan Stancek
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, leosilva@linux.vnet.ibm.com,
    mhcerri@linux.vnet.ibm.com, fin@linux.vnet.ibm.com,
    herbert@gondor.apana.org.au, davem@davemloft.net, jstancek@redhat.com
Subject: [PATCH] crypto: nx-sha256/512: respect sg limit bounds when building sg lists
Date: Sat, 8 Aug 2015 08:47:28 +0200
Message-Id: <58cb150c5cef7194cb3fa8e6117ce3d79c603a58.1439016075.git.jstancek@redhat.com>

Commit 000851119e80 changed the sha256/512 update functions to pass more
data to nx_build_sg_list(), which ends up with sg list overflows and
usually with the update functions failing for data larger than
max_sg_len * NX_PAGE_SIZE.

This happens because:

- Both "total" and "to_process" are updated, which leads to "to_process"
  wrapping around for some data lengths (see the sketch below).
  For example: in the first iteration "total" is 50, and let's assume
  "to_process" is 30 due to sg limits. At the end of the first iteration
  "total" is set to 20. At the start of the 2nd iteration "to_process"
  wraps around on:
      to_process = total - to_process;

- "in_sg" is not reset to nx_ctx->in_sg after each iteration.

- nx_build_sg_list() overflows the sg list because the amount of data
  passed to it would require more than sgmax elements.

- As a consequence of the previous item, data stored in the overflowed
  sg list may no longer be aligned to SHA*_BLOCK_SIZE.

This patch changes the sha256/512 update functions so that "to_process"
respects the sg limits and never passes more data to nx_build_sg_list()
than the sg list can hold, avoiding the overflows.
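For illustration only (this is not driver code), here is a minimal
user-space sketch of the wrap-around from the first item above, using the
same 50/30/20 values; uint64_t stands in for the kernel's u64:

/* Standalone illustration of the old "to_process = total - to_process"
 * wrapping around once the sg limit makes to_process larger than the
 * remaining total. Values mirror the 50/30/20 example above. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t total = 50, to_process = 30; /* iteration 1: sg limit caps chunk at 30 */

	total -= to_process;                  /* 20 bytes left for iteration 2 */

	/* iteration 2: old calculation subtracts the previous chunk size */
	to_process = total - to_process;      /* 20 - 30 wraps to 2^64 - 10 */

	printf("to_process = %llu\n", (unsigned long long)to_process);
	return 0;
}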
"to_process" is calculated as minimum of "total" and sg limits at start of every iteration. Fixes: 000851119e80 ("crypto: nx - Fix SHA concurrence issue and sg limit bounds") Signed-off-by: Jan Stancek Cc: Leonidas Da Silva Barbosa Cc: Marcelo Henrique Cerri Cc: Fionnuala Gunter Cc: Herbert Xu Cc: "David S. Miller" --- drivers/crypto/nx/nx-sha256.c | 27 ++++++++++++++++----------- drivers/crypto/nx/nx-sha512.c | 28 ++++++++++++++++------------ 2 files changed, 32 insertions(+), 23 deletions(-) diff --git a/drivers/crypto/nx/nx-sha256.c b/drivers/crypto/nx/nx-sha256.c index 08f8d5cd6334..becb738c897b 100644 --- a/drivers/crypto/nx/nx-sha256.c +++ b/drivers/crypto/nx/nx-sha256.c @@ -71,7 +71,6 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data, struct sha256_state *sctx = shash_desc_ctx(desc); struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; - struct nx_sg *in_sg; struct nx_sg *out_sg; u64 to_process = 0, leftover, total; unsigned long irq_flags; @@ -97,7 +96,6 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data, NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE; NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION; - in_sg = nx_ctx->in_sg; max_sg_len = min_t(u64, nx_ctx->ap->sglen, nx_driver.of.max_sg_len/sizeof(struct nx_sg)); max_sg_len = min_t(u64, max_sg_len, @@ -114,17 +112,12 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data, } do { - /* - * to_process: the SHA256_BLOCK_SIZE data chunk to process in - * this update. This value is also restricted by the sg list - * limits. - */ - to_process = total - to_process; - to_process = to_process & ~(SHA256_BLOCK_SIZE - 1); + int used_sgs = 0; + struct nx_sg *in_sg = nx_ctx->in_sg; if (buf_len) { data_len = buf_len; - in_sg = nx_build_sg_list(nx_ctx->in_sg, + in_sg = nx_build_sg_list(in_sg, (u8 *) sctx->buf, &data_len, max_sg_len); @@ -133,15 +126,27 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data, rc = -EINVAL; goto out; } + used_sgs = in_sg - nx_ctx->in_sg; } + /* to_process: SHA256_BLOCK_SIZE aligned chunk to be + * processed in this iteration. This value is restricted + * by sg list limits and number of sgs we already used + * for leftover data. (see above) + * In ideal case, we could allow NX_PAGE_SIZE * max_sg_len, + * but because data may not be aligned, we need to account + * for that too. 
+		 */
+		to_process = min_t(u64, total,
+				(max_sg_len - 1 - used_sgs) * NX_PAGE_SIZE);
+		to_process = to_process & ~(SHA256_BLOCK_SIZE - 1);
+
 		data_len = to_process - buf_len;
 		in_sg = nx_build_sg_list(in_sg, (u8 *) data, &data_len,
 					 max_sg_len);
 
 		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
 
-		to_process = (data_len + buf_len);
+		to_process = data_len + buf_len;
 		leftover = total - to_process;
 
 		/*
diff --git a/drivers/crypto/nx/nx-sha512.c b/drivers/crypto/nx/nx-sha512.c
index aff0fe58eac0..b6e183d58d73 100644
--- a/drivers/crypto/nx/nx-sha512.c
+++ b/drivers/crypto/nx/nx-sha512.c
@@ -71,7 +71,6 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 	struct sha512_state *sctx = shash_desc_ctx(desc);
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
 	struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
-	struct nx_sg *in_sg;
 	struct nx_sg *out_sg;
 	u64 to_process, leftover = 0, total;
 	unsigned long irq_flags;
@@ -97,7 +96,6 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 	NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
 	NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 
-	in_sg = nx_ctx->in_sg;
 	max_sg_len = min_t(u64, nx_ctx->ap->sglen,
 			nx_driver.of.max_sg_len/sizeof(struct nx_sg));
 	max_sg_len = min_t(u64, max_sg_len,
@@ -114,18 +112,12 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 	}
 
 	do {
-		/*
-		 * to_process: the SHA512_BLOCK_SIZE data chunk to process in
-		 * this update. This value is also restricted by the sg list
-		 * limits.
-		 */
-		to_process = total - leftover;
-		to_process = to_process & ~(SHA512_BLOCK_SIZE - 1);
-		leftover = total - to_process;
+		int used_sgs = 0;
+		struct nx_sg *in_sg = nx_ctx->in_sg;
 
 		if (buf_len) {
 			data_len = buf_len;
-			in_sg = nx_build_sg_list(nx_ctx->in_sg,
+			in_sg = nx_build_sg_list(in_sg,
						 (u8 *) sctx->buf,
						 &data_len, max_sg_len);
@@ -133,8 +125,20 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 				rc = -EINVAL;
 				goto out;
 			}
+			used_sgs = in_sg - nx_ctx->in_sg;
 		}
 
+		/* to_process: SHA512_BLOCK_SIZE aligned chunk to be
+		 * processed in this iteration. This value is restricted
+		 * by sg list limits and number of sgs we already used
+		 * for leftover data. (see above)
+		 * In ideal case, we could allow NX_PAGE_SIZE * max_sg_len,
+		 * but because data may not be aligned, we need to account
+		 * for that too. */
+		to_process = min_t(u64, total,
+				(max_sg_len - 1 - used_sgs) * NX_PAGE_SIZE);
+		to_process = to_process & ~(SHA512_BLOCK_SIZE - 1);
+
 		data_len = to_process - buf_len;
 		in_sg = nx_build_sg_list(in_sg, (u8 *) data, &data_len,
 					 max_sg_len);
@@ -146,7 +150,7 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 			goto out;
 		}
 
-		to_process = (data_len + buf_len);
+		to_process = data_len + buf_len;
 		leftover = total - to_process;
 
 		/*
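As an aside, the corrected chunk sizing used in both hunks can be
exercised stand-alone. The sketch below mirrors the min_t() cap plus
block-size alignment; the constants and sample values (4096-byte NX
pages, 32 sg entries, 1 entry already used for leftover data) are
illustrative assumptions, not taken from the driver:

/* Standalone sketch (not driver code) of the new chunk sizing: the chunk
 * is capped by the sg entries still free after the leftover buffer
 * consumed "used_sgs" of them, then rounded down to whole hash blocks. */
#include <stdio.h>
#include <stdint.h>

#define NX_PAGE_SIZE		4096ULL	/* assumed page size */
#define SHA256_BLOCK_SIZE	64ULL

static uint64_t chunk_size(uint64_t total, uint64_t max_sg_len,
			   uint64_t used_sgs)
{
	uint64_t to_process = total;

	/* never ask for more than the remaining sg slots can hold */
	if (to_process > (max_sg_len - 1 - used_sgs) * NX_PAGE_SIZE)
		to_process = (max_sg_len - 1 - used_sgs) * NX_PAGE_SIZE;

	/* keep the chunk a multiple of the hash block size */
	return to_process & ~(SHA256_BLOCK_SIZE - 1);
}

int main(void)
{
	/* e.g. 3 MiB of input, 32 sg entries, 1 entry used for leftover */
	printf("chunk = %llu bytes\n",
	       (unsigned long long)chunk_size(3 << 20, 32, 1));
	return 0;
}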