From patchwork Thu May 24 11:56:38 2018
X-Patchwork-Submitter: Harsh Jain <harsh@chelsio.com>
X-Patchwork-Id: 10424517
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Harsh Jain <harsh@chelsio.com>
To: linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    atul.gupta@chelsio.com, indranil@chelsio.com
Cc: Harsh Jain <harsh@chelsio.com>
Subject: [PATCH 2/3] crypto:chelsio:Send IV as Immediate for cipher algo
Date: Thu, 24 May 2018 17:26:38 +0530
Message-Id: <734ac569e668285b02c606a4f58e4c4714664c6f.1527159542.git.harsh@chelsio.com>
X-Mailer: git-send-email 2.1.4
X-Mailing-List: linux-crypto@vger.kernel.org

Send the IV in the WR as immediate data instead of as a DMA-mapped
entry for cipher algorithms.

Signed-off-by: Harsh Jain <harsh@chelsio.com>
---
 drivers/crypto/chelsio/chcr_algo.c   | 49 +++++++++++-------------------------
 drivers/crypto/chelsio/chcr_algo.h   |  3 +--
 drivers/crypto/chelsio/chcr_core.h   |  2 +-
 drivers/crypto/chelsio/chcr_crypto.h |  3 +--
 4 files changed, 17 insertions(+), 40 deletions(-)
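Note (illustration only, not part of the patch to apply): the layout
change is easier to see outside the driver. The IV is now copied into
the work request itself, immediately ahead of the immediate payload (or
the ULPTX SGL), instead of being dma_map_single()'d and referenced
through an extra SGL/DSGL entry. Below is a minimal userspace C sketch
of that layout; the struct names and sizes (wr_headers, phys_dsgl,
build_wr) are illustrative stand-ins, not the driver's real types.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IV 16	/* AES block size, as in the driver's IV macro */

/* Illustrative stand-ins for the real WR pieces; sizes are arbitrary. */
struct wr_headers { uint8_t bytes[80]; };  /* FW/CPL headers + key ctx */
struct phys_dsgl  { uint8_t bytes[16]; };  /* destination DSGL header */

/*
 * Model of the new layout: headers, DSGL, then the IV copied inline
 * ("immediate"), then the immediate payload. Returns the WR length
 * rounded to 16 bytes, mirroring roundup(transhdr_len, 16) in
 * create_cipher_wr().
 */
static size_t build_wr(uint8_t *wr, size_t wr_size, const uint8_t iv[IV],
		       const uint8_t *payload, size_t len)
{
	size_t off = sizeof(struct wr_headers) + sizeof(struct phys_dsgl);

	if (wr_size < off + IV + len)
		return 0;		/* caller must fall back to SG mode */

	memcpy(wr + off, iv, IV);	/* IV as immediate data */
	off += IV;
	memcpy(wr + off, payload, len);	/* immediate payload */
	off += len;

	return (off + 15) & ~(size_t)15;	/* roundup(off, 16) */
}

int main(void)
{
	uint8_t wr[256] = { 0 }, iv[IV] = { 0 }, data[48] = { 0 };

	printf("WR length: %zu bytes\n",
	       build_wr(wr, sizeof(wr), iv, data, sizeof(data)));
	return 0;
}

Because the 16-byte IV now always occupies space in the WR, the size
macros touched below (CIPHER_TRANSHDR_SIZE and CIP_WR_MIN_LEN) grow by
AES_BLOCK_SIZE and 16 bytes respectively, while the SGL/DSGL entry
counts shrink by one.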
diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index db15a9f..b2bfeb2 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -638,7 +638,6 @@ static int chcr_sg_ent_in_wr(struct scatterlist *src,
 			src = sg_next(src);
 			srcskip = 0;
 		}
-
 		if (sg_dma_len(dst) == dstskip) {
 			dst = sg_next(dst);
 			dstskip = 0;
@@ -761,13 +760,13 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
 	nents = sg_nents_xlen(reqctx->dstsg, wrparam->bytes, CHCR_DST_SG_SIZE,
 			      reqctx->dst_ofst);
-	dst_size = get_space_for_phys_dsgl(nents + 1);
+	dst_size = get_space_for_phys_dsgl(nents);
 	kctx_len = roundup(ablkctx->enckey_len, 16);
 	transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dst_size);
 	nents = sg_nents_xlen(reqctx->srcsg, wrparam->bytes, CHCR_SRC_SG_SIZE,
 			      reqctx->src_ofst);
-	temp = reqctx->imm ? roundup(IV + wrparam->req->nbytes, 16) :
-			     (sgl_len(nents + MIN_CIPHER_SG) * 8);
+	temp = reqctx->imm ? roundup(wrparam->bytes, 16) :
+			     (sgl_len(nents) * 8);
 	transhdr_len += temp;
 	transhdr_len = roundup(transhdr_len, 16);
 	skb = alloc_skb(SGE_MAX_WR_LEN, flags);
@@ -789,7 +788,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
 					ablkctx->ciph_mode,
 					0, 0, IV >> 1);
 	chcr_req->sec_cpl.ivgen_hdrlen = FILL_SEC_CPL_IVGEN_HDRLEN(0, 0, 0,
-							  0, 0, dst_size);
+							  0, 1, dst_size);
 
 	chcr_req->key_ctx.ctx_hdr = ablkctx->key_ctx_hdr;
 	if ((reqctx->op == CHCR_DECRYPT_OP) &&
@@ -819,8 +818,8 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
 	chcr_add_cipher_dst_ent(wrparam->req, phys_cpl, wrparam, wrparam->qid);
 
 	atomic_inc(&adap->chcr_stats.cipher_rqst);
-	temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + kctx_len
-		+(reqctx->imm ? (IV + wrparam->bytes) : 0);
+	temp = sizeof(struct cpl_rx_phys_dsgl) + dst_size + kctx_len + IV
+		+ (reqctx->imm ? (wrparam->bytes) : 0);
 	create_wreq(c_ctx(tfm), chcr_req, &(wrparam->req->base), reqctx->imm, 0,
 		    transhdr_len, temp,
 		ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC);
@@ -1023,7 +1022,7 @@ static int chcr_update_tweak(struct ablkcipher_request *req, u8 *iv,
 	ret = crypto_cipher_setkey(cipher, key, keylen);
 	if (ret)
 		goto out;
-	/*H/W sends the encrypted IV in dsgl when AADIVDROP bit is 0*/
+	crypto_cipher_encrypt_one(cipher, iv, iv);
 	for (i = 0; i < round8; i++)
 		gf128mul_x8_ble((le128 *)iv, (le128 *)iv);
 
@@ -1115,7 +1114,7 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req,
 	}
 	if (!reqctx->imm) {
-		bytes = chcr_sg_ent_in_wr(reqctx->srcsg, reqctx->dstsg, 1,
+		bytes = chcr_sg_ent_in_wr(reqctx->srcsg, reqctx->dstsg, 0,
 					  CIP_SPACE_LEFT(ablkctx->enckey_len),
 					  reqctx->src_ofst, reqctx->dst_ofst);
 		if ((bytes + reqctx->processed) >= req->nbytes)
@@ -1126,11 +1125,7 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req,
 		/*CTR mode counter overfloa*/
 		bytes = req->nbytes - reqctx->processed;
 	}
-	dma_sync_single_for_cpu(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev,
-				reqctx->iv_dma, IV, DMA_BIDIRECTIONAL);
 	err = chcr_update_cipher_iv(req, fw6_pld, reqctx->iv);
-	dma_sync_single_for_device(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev,
-				   reqctx->iv_dma, IV, DMA_BIDIRECTIONAL);
 	if (err)
 		goto unmap;
@@ -1205,7 +1200,6 @@ static int process_cipher(struct ablkcipher_request *req,
 
 	dnents = sg_nents_xlen(req->dst, req->nbytes, CHCR_DST_SG_SIZE, 0);
-	dnents += 1; // IV
 	phys_dsgl = get_space_for_phys_dsgl(dnents);
 	kctx_len = roundup(ablkctx->enckey_len, 16);
 	transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, phys_dsgl);
@@ -1218,8 +1212,7 @@ static int process_cipher(struct ablkcipher_request *req,
 	}
 
 	if (!reqctx->imm) {
-		bytes = chcr_sg_ent_in_wr(req->src, req->dst,
-					  MIN_CIPHER_SG,
+		bytes = chcr_sg_ent_in_wr(req->src, req->dst, 0,
 					  CIP_SPACE_LEFT(ablkctx->enckey_len),
 					  0, 0);
 		if ((bytes + reqctx->processed) >= req->nbytes)
@@ -2516,22 +2509,20 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
 }
 
 void chcr_add_cipher_src_ent(struct ablkcipher_request *req,
-			     struct ulptx_sgl *ulptx,
+			     void *ulptx,
 			     struct cipher_wr_param *wrparam)
 {
 	struct ulptx_walk ulp_walk;
 	struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
+	u8 *buf = ulptx;
 
+	memcpy(buf, reqctx->iv, IV);
+	buf += IV;
 	if (reqctx->imm) {
-		u8 *buf = (u8 *)ulptx;
-
-		memcpy(buf, reqctx->iv, IV);
-		buf += IV;
 		sg_pcopy_to_buffer(req->src, sg_nents(req->src),
 				   buf, wrparam->bytes, reqctx->processed);
 	} else {
-		ulptx_walk_init(&ulp_walk, ulptx);
-		ulptx_walk_add_page(&ulp_walk, IV, &reqctx->iv_dma);
+		ulptx_walk_init(&ulp_walk, (struct ulptx_sgl *)buf);
 		ulptx_walk_add_sg(&ulp_walk, reqctx->srcsg, wrparam->bytes,
 				  reqctx->src_ofst);
 		reqctx->srcsg = ulp_walk.last_sg;
@@ -2549,7 +2540,6 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
 	struct dsgl_walk dsgl_walk;
 
 	dsgl_walk_init(&dsgl_walk, phys_cpl);
-	dsgl_walk_add_page(&dsgl_walk, IV, &reqctx->iv_dma);
 	dsgl_walk_add_sg(&dsgl_walk, reqctx->dstsg, wrparam->bytes,
 			 reqctx->dst_ofst);
 	reqctx->dstsg = dsgl_walk.last_sg;
@@ -2623,12 +2613,6 @@ int chcr_cipher_dma_map(struct device *dev,
 			struct ablkcipher_request *req)
 {
 	int error;
-	struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
-
-	reqctx->iv_dma = dma_map_single(dev, reqctx->iv, IV,
-					DMA_BIDIRECTIONAL);
-	if (dma_mapping_error(dev, reqctx->iv_dma))
-		return -ENOMEM;
 
 	if (req->src == req->dst) {
 		error = dma_map_sg(dev, req->src, sg_nents(req->src),
@@ -2651,17 +2635,12 @@ int chcr_cipher_dma_map(struct device *dev,
 	return 0;
 err:
-	dma_unmap_single(dev, reqctx->iv_dma, IV, DMA_BIDIRECTIONAL);
 	return -ENOMEM;
 }
 
 void chcr_cipher_dma_unmap(struct device *dev,
 			   struct ablkcipher_request *req)
 {
-	struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
-
-	dma_unmap_single(dev, reqctx->iv_dma, IV,
-			 DMA_BIDIRECTIONAL);
 	if (req->src == req->dst) {
 		dma_unmap_sg(dev, req->src, sg_nents(req->src),
 				   DMA_BIDIRECTIONAL);
diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h
index dba3dff..1871500 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -146,7 +146,7 @@
 		      kctx_len)
 #define CIPHER_TRANSHDR_SIZE(kctx_len, sge_pairs) \
 	(TRANSHDR_SIZE((kctx_len)) + (sge_pairs) +\
-	 sizeof(struct cpl_rx_phys_dsgl))
+	 sizeof(struct cpl_rx_phys_dsgl) + AES_BLOCK_SIZE)
 #define HASH_TRANSHDR_SIZE(kctx_len)\
 	(TRANSHDR_SIZE(kctx_len) + DUMMY_BYTES)
@@ -259,7 +259,6 @@
 		ULP_TX_SC_MORE_V((immdatalen)))
 #define MAX_NK 8
 #define MAX_DSGL_ENT			32
-#define MIN_CIPHER_SG			1 /* IV */
 #define MIN_AUTH_SG			1 /* IV */
 #define MIN_GCM_SG			1 /* IV */
 #define MIN_DIGEST_SG			1 /*Partial Buffer*/
diff --git a/drivers/crypto/chelsio/chcr_core.h b/drivers/crypto/chelsio/chcr_core.h
index 1a20424..de3a9c0 100644
--- a/drivers/crypto/chelsio/chcr_core.h
+++ b/drivers/crypto/chelsio/chcr_core.h
@@ -56,7 +56,7 @@
 #define MAX_SALT 4
 #define CIP_WR_MIN_LEN (sizeof(struct chcr_wr) + \
 		    sizeof(struct cpl_rx_phys_dsgl) + \
-		    sizeof(struct ulptx_sgl))
+		    sizeof(struct ulptx_sgl) + 16) //IV
 
 #define HASH_WR_MIN_LEN (sizeof(struct chcr_wr) + \
 			DUMMY_BYTES + \
diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
index c8e8972..97878d4 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -295,7 +295,6 @@ struct chcr_blkcipher_req_ctx {
 	unsigned int src_ofst;
 	unsigned int dst_ofst;
 	unsigned int op;
-	dma_addr_t iv_dma;
 	u16 imm;
 	u8 iv[CHCR_MAX_CRYPTO_IV_LEN];
 };
@@ -327,7 +326,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
 void chcr_add_aead_src_ent(struct aead_request *req, struct ulptx_sgl *ulptx,
 			   unsigned int assoclen, unsigned short op_type);
 void chcr_add_cipher_src_ent(struct ablkcipher_request *req,
-			     struct ulptx_sgl *ulptx,
+			     void *ulptx,
 			     struct cipher_wr_param *wrparam);
 int chcr_cipher_dma_map(struct device *dev, struct ablkcipher_request *req);
 void chcr_cipher_dma_unmap(struct device *dev, struct ablkcipher_request *req);
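Note on the chcr_update_tweak() change above (illustration only): per
the comment the patch removes, the hardware returns the encrypted IV in
the DSGL only while the AADIVDROP bit is 0; with the fifth argument of
FILL_SEC_CPL_IVGEN_HDRLEN() now set to 1, the driver must encrypt the
XTS tweak itself (crypto_cipher_encrypt_one()) before advancing it with
gf128mul_x8_ble()/gf128mul_x_ble(). As a rough standalone model of the
advance step, here is a hand-rolled little-endian GF(2^128)
multiply-by-x as used by XTS -- a stand-in sketch, not the kernel's
gf128mul code:

#include <stdint.h>
#include <stdio.h>

/*
 * Advance a 16-byte XTS tweak by one block: multiply by x in
 * GF(2^128), little-endian byte order. Eight calls correspond to one
 * gf128mul_x8_ble() step.
 */
static void xts_tweak_mul_x(uint8_t t[16])
{
	unsigned int carry = 0;

	for (int i = 0; i < 16; i++) {
		unsigned int msb = t[i] >> 7;	/* bit shifted out */

		t[i] = (uint8_t)((t[i] << 1) | carry);
		carry = msb;
	}
	if (carry)		/* reduce by x^128 + x^7 + x^2 + x + 1 */
		t[0] ^= 0x87;
}

int main(void)
{
	uint8_t tweak[16] = { 1 };	/* example tweak */

	for (int i = 0; i < 8; i++)	/* one x^8-sized step */
		xts_tweak_mul_x(tweak);
	printf("tweak[0..1] = %02x %02x\n", tweak[0], tweak[1]);
	return 0;
}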