From patchwork Wed Jan 29 14:37:54 2020
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 11356275
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Gilad Ben-Yossef
To: Herbert Xu, "David S. Miller"
Cc: Ofir Drang, stable@vger.kernel.org, Geert Uytterhoeven,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] crypto: ccree - protect against empty or NULL scatterlists
Date: Wed, 29 Jan 2020 16:37:54 +0200
Message-Id: <20200129143757.680-2-gilad@benyossef.com>
In-Reply-To: <20200129143757.680-1-gilad@benyossef.com>
References: <20200129143757.680-1-gilad@benyossef.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Deal gracefully with a NULL or empty scatterlist, which can happen if
both cryptlen and assoclen are zero and we're doing in-place AEAD
encryption. This fixes a crash where we would otherwise try to map a
NULL page, at least with some platform / DMA mapping configurations.
Cc: stable@vger.kernel.org # v4.19+
Reported-by: Geert Uytterhoeven
Tested-by: Geert Uytterhoeven
Signed-off-by: Gilad Ben-Yossef
---
 drivers/crypto/ccree/cc_buffer_mgr.c | 62 ++++++++++++----------------
 drivers/crypto/ccree/cc_buffer_mgr.h |  1 +
 2 files changed, 28 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index a72586eccd81..b938ceae7ae7 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -87,6 +87,8 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
 {
 	unsigned int nents = 0;
 
+	*lbytes = 0;
+
 	while (nbytes && sg_list) {
 		nents++;
 		/* get the number of bytes in the last entry */
@@ -95,6 +97,7 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
 			nbytes : sg_list->length;
 		sg_list = sg_next(sg_list);
 	}
+
 	dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes);
 	return nents;
 }
@@ -290,37 +293,25 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
 		     unsigned int nbytes, int direction, u32 *nents,
 		     u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
 {
-	if (sg_is_last(sg)) {
-		/* One entry only case -set to DLLI */
-		if (dma_map_sg(dev, sg, 1, direction) != 1) {
-			dev_err(dev, "dma_map_sg() single buffer failed\n");
-			return -ENOMEM;
-		}
-		dev_dbg(dev, "Mapped sg: dma_address=%pad page=%p addr=%pK offset=%u length=%u\n",
-			&sg_dma_address(sg), sg_page(sg), sg_virt(sg),
-			sg->offset, sg->length);
-		*lbytes = nbytes;
-		*nents = 1;
-		*mapped_nents = 1;
-	} else { /*sg_is_last*/
-		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
-		if (*nents > max_sg_nents) {
-			*nents = 0;
-			dev_err(dev, "Too many fragments. current %d max %d\n",
-				*nents, max_sg_nents);
-			return -ENOMEM;
-		}
-		/* In case of mmu the number of mapped nents might
-		 * be changed from the original sgl nents
-		 */
-		*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
-		if (*mapped_nents == 0) {
-			*nents = 0;
-			dev_err(dev, "dma_map_sg() sg buffer failed\n");
-			return -ENOMEM;
-		}
+	int ret = 0;
+
+	*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+	if (*nents > max_sg_nents) {
+		*nents = 0;
+		dev_err(dev, "Too many fragments. current %d max %d\n",
+			*nents, max_sg_nents);
+		return -ENOMEM;
 	}
+
+	ret = dma_map_sg(dev, sg, *nents, direction);
+	if (dma_mapping_error(dev, ret)) {
+		*nents = 0;
+		dev_err(dev, "dma_map_sg() sg buffer failed %d\n", ret);
+		return -ENOMEM;
+	}
+
+	*mapped_nents = ret;
+
 	return 0;
 }
@@ -555,11 +546,12 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 		sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
 		areq_ctx->assoclen, req->cryptlen);
 
-	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_BIDIRECTIONAL);
+	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
+		     DMA_BIDIRECTIONAL);
 	if (req->src != req->dst) {
 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
 			sg_virt(req->dst));
-		dma_unmap_sg(dev, req->dst, sg_nents(req->dst),
+		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
 			     DMA_BIDIRECTIONAL);
 	}
 	if (drvdata->coherent &&
@@ -881,7 +873,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 					    &src_last_bytes);
 	sg_index = areq_ctx->src_sgl->length;
 	//check where the data starts
-	while (sg_index <= size_to_skip) {
+	while (src_mapped_nents && (sg_index <= size_to_skip)) {
 		src_mapped_nents--;
 		offset -= areq_ctx->src_sgl->length;
 		sgl = sg_next(areq_ctx->src_sgl);
@@ -908,7 +900,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 		size_for_map += crypto_aead_ivsize(tfm);
 
 	rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
-		       &areq_ctx->dst.nents,
+		       &areq_ctx->dst.mapped_nents,
 		       LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
 		       &dst_mapped_nents);
 	if (rc)
@@ -921,7 +913,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 	offset = size_to_skip;
 
 	//check where the data starts
-	while (sg_index <= size_to_skip) {
+	while (dst_mapped_nents && sg_index <= size_to_skip) {
 		dst_mapped_nents--;
 		offset -= areq_ctx->dst_sgl->length;
 		sgl = sg_next(areq_ctx->dst_sgl);
@@ -1123,7 +1115,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
 	if (is_gcm4543)
 		size_to_map += crypto_aead_ivsize(tfm);
 	rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
-		       &areq_ctx->src.nents,
+		       &areq_ctx->src.mapped_nents,
 		       (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
 			LLI_MAX_NUM_OF_DATA_ENTRIES),
 		       &dummy, &mapped_nents);
diff --git a/drivers/crypto/ccree/cc_buffer_mgr.h b/drivers/crypto/ccree/cc_buffer_mgr.h
index af434872c6ff..827b6cb1236e 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.h
+++ b/drivers/crypto/ccree/cc_buffer_mgr.h
@@ -25,6 +25,7 @@ enum cc_sg_cpy_direct {
 struct cc_mlli {
 	cc_sram_addr_t sram_addr;
+	unsigned int mapped_nents;
 	unsigned int nents; //sg nents
 	unsigned int mlli_nents; //mlli nents might be different than the above
 };
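For context, the case being guarded against is an AEAD request whose cryptlen
and assoclen are both zero and which operates in place, so the only bytes
involved are the authentication tag. The sketch below shows what such a request
looks like against the kernel crypto API; it is illustrative only and not part
of the patch -- the gcm(aes) transform, the 16-byte key, and the omission of
error and async-completion handling are assumptions made for brevity.

#include <crypto/aead.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hypothetical reproducer sketch: in-place AEAD encryption with
 * cryptlen == 0 and assoclen == 0, leaving only the tag to map.
 * Error and async-completion handling omitted.
 */
static int empty_inplace_aead_sketch(void)
{
	static const u8 key[16];	/* all-zero key, illustration only */
	u8 iv[12] = { 0 };
	struct crypto_aead *tfm;
	struct aead_request *req;
	struct scatterlist sg;
	u8 *buf;
	int err;

	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	crypto_aead_setkey(tfm, key, sizeof(key));
	crypto_aead_setauthsize(tfm, 16);

	buf = kmalloc(16, GFP_KERNEL);		/* room for the tag only */
	req = aead_request_alloc(tfm, GFP_KERNEL);
	sg_init_one(&sg, buf, 16);

	aead_request_set_ad(req, 0);			/* assoclen == 0 */
	aead_request_set_crypt(req, &sg, &sg, 0, iv);	/* in place, cryptlen == 0 */

	/* Before this patch, ccree could end up trying to map a NULL page here. */
	err = crypto_aead_encrypt(req);

	aead_request_free(req);
	crypto_free_aead(tfm);
	kfree(buf);
	return err;
}

Note that recording the number of entries actually mapped (mapped_nents)
separately from the scatterlist's own nents is what lets
cc_unmap_aead_request() above unmap exactly what was mapped.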
From patchwork Wed Jan 29 14:37:55 2020
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 11356273
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Gilad Ben-Yossef
To: Herbert Xu, "David S. Miller"
Cc: Ofir Drang, Geert Uytterhoeven, stable@vger.kernel.org,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] crypto: ccree - only try to map auth tag if needed
Date: Wed, 29 Jan 2020 16:37:55 +0200
Message-Id: <20200129143757.680-3-gilad@benyossef.com>
In-Reply-To: <20200129143757.680-1-gilad@benyossef.com>
References: <20200129143757.680-1-gilad@benyossef.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Make sure to add the size of the auth tag to the source mapping for
encryption only if it is an in-place operation. Failing to do so
previously caused us to try to map authsize bytes from a NULL mapping,
and to crash, when both cryptlen and assoclen are zero.

Reported-by: Geert Uytterhoeven
Tested-by: Geert Uytterhoeven
Signed-off-by: Gilad Ben-Yossef
Cc: stable@vger.kernel.org # v4.19+
---
 drivers/crypto/ccree/cc_buffer_mgr.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index b938ceae7ae7..885347b5b372 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -1109,9 +1109,11 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
 	}
 
 	size_to_map = req->cryptlen + areq_ctx->assoclen;
-	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
+	/* If we do in-place encryption, we also need the auth tag */
+	if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) &&
+	    (req->src == req->dst)) {
 		size_to_map += authsize;
-
+	}
 	if (is_gcm4543)
 		size_to_map += crypto_aead_ivsize(tfm);
 	rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
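In other words, the source mapping only has to cover the tag when the request
is in place; an out-of-place encryption writes the tag into the destination
buffer only. The helper below is an illustrative restatement of that rule, not
code from the driver -- the function name and the explicit assoclen/authsize/
encrypt parameters are assumptions made so the sketch stands alone.

#include <crypto/aead.h>

/* Illustrative helper (not in the driver): how many bytes of req->src
 * need DMA mapping under the rule this patch introduces.
 */
static unsigned int cc_src_map_len_sketch(struct aead_request *req,
					  unsigned int assoclen,
					  unsigned int authsize,
					  bool encrypt)
{
	unsigned int len = req->cryptlen + assoclen;

	/* Only an in-place encryption writes the tag back into the
	 * source buffer, so only then must src cover authsize bytes more.
	 */
	if (encrypt && req->src == req->dst)
		len += authsize;

	return len;
}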
From patchwork Wed Jan 29 14:37:56 2020
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 11356271
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Gilad Ben-Yossef
To: Herbert Xu, "David S. Miller"
Cc: Ofir Drang, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/4] crypto: ccree - fix some reported cipher block sizes
Date: Wed, 29 Jan 2020 16:37:56 +0200
Message-Id: <20200129143757.680-4-gilad@benyossef.com>
In-Reply-To: <20200129143757.680-1-gilad@benyossef.com>
References: <20200129143757.680-1-gilad@benyossef.com>
X-Mailing-List: linux-crypto@vger.kernel.org

The block sizes of the OFB and CTR modes were wrongly reported as the
block size of the underlying cipher. Fix them to 1 byte, since these
modes turn a block cipher into a stream cipher. Also document why our
XTS implementation differs from the generic one.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/crypto/ccree/cc_cipher.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
index c08dee04941b..73457548ee92 100644
--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -1236,6 +1236,10 @@ static const struct cc_alg_template skcipher_algs[] = {
 		.sec_func = true,
 	},
 	{
+		/* See https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg40576.html
+		 * for the reason why this differs from the generic
+		 * implementation.
+		 */
 		.name = "xts(aes)",
 		.driver_name = "xts-aes-ccree",
 		.blocksize = 1,
@@ -1431,7 +1435,7 @@ static const struct cc_alg_template skcipher_algs[] = {
 	{
 		.name = "ofb(aes)",
 		.driver_name = "ofb-aes-ccree",
-		.blocksize = AES_BLOCK_SIZE,
+		.blocksize = 1,
 		.template_skcipher = {
 			.setkey = cc_cipher_setkey,
 			.encrypt = cc_cipher_encrypt,
@@ -1584,7 +1588,7 @@ static const struct cc_alg_template skcipher_algs[] = {
 	{
 		.name = "ctr(sm4)",
 		.driver_name = "ctr-sm4-ccree",
-		.blocksize = SM4_BLOCK_SIZE,
+		.blocksize = 1,
 		.template_skcipher = {
 			.setkey = cc_cipher_setkey,
 			.encrypt = cc_cipher_encrypt,
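The registered block size is what crypto_skcipher_blocksize() reports to API
users, so a counter or feedback mode that can handle arbitrary byte lengths
should advertise a block size of 1. A small hedged check is sketched below;
the function name is made up, error handling is trimmed, and allocating by the
"ctr-sm4-ccree" driver name assumes the ccree driver is loaded and registered.

#include <crypto/skcipher.h>
#include <linux/err.h>

static void check_ctr_blocksize_sketch(void)
{
	struct crypto_skcipher *tfm;

	/* Ask for the ccree implementation explicitly by driver name. */
	tfm = crypto_alloc_skcipher("ctr-sm4-ccree", 0, 0);
	if (IS_ERR(tfm))
		return;

	/* With this patch the mode is advertised as a stream cipher. */
	WARN_ON(crypto_skcipher_blocksize(tfm) != 1);

	crypto_free_skcipher(tfm);
}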
Miller" Cc: Ofir Drang , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 4/4] crypto: ccree - fix AEAD blocksize registration Date: Wed, 29 Jan 2020 16:37:57 +0200 Message-Id: <20200129143757.680-5-gilad@benyossef.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200129143757.680-1-gilad@benyossef.com> References: <20200129143757.680-1-gilad@benyossef.com> MIME-Version: 1.0 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Fix an error causing no block sizes to be reported during all AEAD registrations. Signed-off-by: Gilad Ben-Yossef Tested-by: Geert Uytterhoeven --- drivers/crypto/ccree/cc_aead.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c index edab44cbd353..b69cfbebc59a 100644 --- a/drivers/crypto/ccree/cc_aead.c +++ b/drivers/crypto/ccree/cc_aead.c @@ -2642,6 +2642,7 @@ static struct cc_crypto_alg *cc_create_aead_alg(struct cc_alg_template *tmpl, alg->base.cra_ctxsize = sizeof(struct cc_aead_ctx); alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->base.cra_blocksize = tmpl->blocksize; alg->init = cc_aead_init; alg->exit = cc_aead_exit;