From patchwork Wed Mar 28 12:39:19 2018
X-Patchwork-Submitter: Horia Geanta
X-Patchwork-Id: 10313621
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Horia Geantă
To: Herbert Xu
Cc: "David S. Miller", Aymen Sghaier, Gilad Ben-Yossef, David Gstir
Subject: [PATCH 3/3] crypto: caam/qi - fix IV DMA mapping and updating
Date: Wed, 28 Mar 2018 15:39:19 +0300
Message-ID: <20180328123919.24120-4-horia.geanta@nxp.com>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180328123919.24120-1-horia.geanta@nxp.com>
References: <20180328123919.24120-1-horia.geanta@nxp.com>
X-Mailing-List: linux-crypto@vger.kernel.org

There are two IV-related issues:
(1) the crypto API does not guarantee to provide an IV buffer that is
    DMAable, thus it's incorrect to DMA map it
(2) for in-place decryption, since ciphertext is overwritten with
    plaintext, the updated IV (req->info) will contain the last block of
    plaintext (instead of the last block of ciphertext)

While these two issues could be fixed separately, it's straightforward to
fix both at the same time - by using the {ablkcipher,aead}_edesc extended
descriptor to store the IV that will be fed to the crypto engine; this
allows for fixing (2) by saving req->src[last_block] in req->info
directly, i.e. without allocating yet another temporary buffer.

A side effect of the fix is that it's no longer possible to have the IV
contiguous with req->src or req->dst. Code checking for this case is
removed.

Cc: # 4.14+
Fixes: a68a19380522 ("crypto: caam/qi - properly set IV after {en,de}crypt")
Link: http://lkml.kernel.org/r/20170113084620.GF22022@gondor.apana.org.au
Reported-by: Gilad Ben-Yossef
Signed-off-by: Horia Geantă
---
 drivers/crypto/caam/caamalg_qi.c | 227 ++++++++++++++++++++-------------------
 1 file changed, 116 insertions(+), 111 deletions(-)

diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index c2b5762d56a0..a6b76b3c8abe 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -726,7 +726,7 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
  * @assoclen: associated data length, in CAAM endianness
  * @assoclen_dma: bus physical mapped address of req->assoclen
  * @drv_req: driver-specific request structure
- * @sgt: the h/w link table
+ * @sgt: the h/w link table, followed by IV
  */
 struct aead_edesc {
 	int src_nents;
@@ -737,9 +737,6 @@ struct aead_edesc {
 	unsigned int assoclen;
 	dma_addr_t assoclen_dma;
 	struct caam_drv_req drv_req;
-#define CAAM_QI_MAX_AEAD_SG \
-	((CAAM_QI_MEMCACHE_SIZE - offsetof(struct aead_edesc, sgt)) / \
-	 sizeof(struct qm_sg_entry))
 	struct qm_sg_entry sgt[0];
 };
 
@@ -751,7 +748,7 @@ struct aead_edesc {
  * @qm_sg_bytes: length of dma mapped h/w link table
  * @qm_sg_dma: bus physical mapped address of h/w link table
  * @drv_req: driver-specific request structure
- * @sgt: the h/w link table
+ * @sgt: the h/w link table, followed by IV
  */
 struct ablkcipher_edesc {
 	int src_nents;
@@ -760,9 +757,6 @@ struct ablkcipher_edesc {
 	int qm_sg_bytes;
 	dma_addr_t qm_sg_dma;
 	struct caam_drv_req drv_req;
-#define CAAM_QI_MAX_ABLKCIPHER_SG \
-	((CAAM_QI_MEMCACHE_SIZE - offsetof(struct ablkcipher_edesc, sgt)) / \
-	 sizeof(struct qm_sg_entry))
 	struct qm_sg_entry sgt[0];
 };
 
@@ -984,17 +978,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 		}
 	}
 
-	if ((alg->caam.rfc3686 && encrypt) || !alg->caam.geniv) {
+	if ((alg->caam.rfc3686 && encrypt) || !alg->caam.geniv)
 		ivsize = crypto_aead_ivsize(aead);
-		iv_dma = dma_map_single(qidev, req->iv, ivsize, DMA_TO_DEVICE);
-		if (dma_mapping_error(qidev, iv_dma)) {
-			dev_err(qidev, "unable to map IV\n");
-			caam_unmap(qidev, req->src, req->dst, src_nents,
-				   dst_nents, 0, 0, op_type, 0, 0);
-			qi_cache_free(edesc);
-			return ERR_PTR(-ENOMEM);
-		}
-	}
 
 	/*
 	 * Create S/G table: req->assoclen, [IV,] req->src [, req->dst].
@@ -1002,16 +987,33 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	 */
 	qm_sg_ents = 1 + !!ivsize + mapped_src_nents +
 		     (mapped_dst_nents > 1 ? mapped_dst_nents : 0);
-	if (unlikely(qm_sg_ents > CAAM_QI_MAX_AEAD_SG)) {
-		dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
-			qm_sg_ents, CAAM_QI_MAX_AEAD_SG);
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
-			   iv_dma, ivsize, op_type, 0, 0);
+	sg_table = &edesc->sgt[0];
+	qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
+	if (unlikely(offsetof(struct aead_edesc, sgt) + qm_sg_bytes + ivsize >
+		     CAAM_QI_MEMCACHE_SIZE)) {
+		dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n",
+			qm_sg_ents, ivsize);
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
 		qi_cache_free(edesc);
 		return ERR_PTR(-ENOMEM);
 	}
-	sg_table = &edesc->sgt[0];
-	qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
+
+	if (ivsize) {
+		u8 *iv = (u8 *)(sg_table + qm_sg_ents);
+
+		/* Make sure IV is located in a DMAable area */
+		memcpy(iv, req->iv, ivsize);
+
+		iv_dma = dma_map_single(qidev, iv, ivsize, DMA_TO_DEVICE);
+		if (dma_mapping_error(qidev, iv_dma)) {
+			dev_err(qidev, "unable to map IV\n");
+			caam_unmap(qidev, req->src, req->dst, src_nents,
+				   dst_nents, 0, 0, 0, 0, 0);
+			qi_cache_free(edesc);
+			return ERR_PTR(-ENOMEM);
+		}
+	}
 
 	edesc->src_nents = src_nents;
 	edesc->dst_nents = dst_nents;
@@ -1164,15 +1166,27 @@ static void ablkcipher_done(struct caam_drv_req *drv_req, u32 status)
 #endif
 
 	ablkcipher_unmap(qidev, edesc, req);
-	qi_cache_free(edesc);
+
+	/* In case initial IV was generated, copy it in GIVCIPHER request */
+	if (edesc->drv_req.drv_ctx->op_type == GIVENCRYPT) {
+		u8 *iv;
+		struct skcipher_givcrypt_request *greq;
+
+		greq = container_of(req, struct skcipher_givcrypt_request,
+				    creq);
+		iv = (u8 *)edesc->sgt + edesc->qm_sg_bytes;
+		memcpy(greq->giv, iv, ivsize);
+	}
 
 	/*
 	 * The crypto API expects us to set the IV (req->info) to the last
 	 * ciphertext block. This is used e.g. by the CTS mode.
 	 */
-	scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ivsize,
-				 ivsize, 0);
+	if (edesc->drv_req.drv_ctx->op_type != DECRYPT)
+		scatterwalk_map_and_copy(req->info, req->dst, req->nbytes -
+					 ivsize, ivsize, 0);
+
+	qi_cache_free(edesc);
 
 	ablkcipher_request_complete(req, status);
 }
@@ -1187,9 +1201,9 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
 	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
 	struct ablkcipher_edesc *edesc;
 	dma_addr_t iv_dma;
-	bool in_contig;
+	u8 *iv;
 	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
-	int dst_sg_idx, qm_sg_ents;
+	int dst_sg_idx, qm_sg_ents, qm_sg_bytes;
 	struct qm_sg_entry *sg_table, *fd_sgt;
 	struct caam_drv_ctx *drv_ctx;
 	enum optype op_type = encrypt ? ENCRYPT : DECRYPT;
@@ -1236,55 +1250,53 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
 		}
 	}
 
-	iv_dma = dma_map_single(qidev, req->info, ivsize, DMA_TO_DEVICE);
-	if (dma_mapping_error(qidev, iv_dma)) {
-		dev_err(qidev, "unable to map IV\n");
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
-			   0, 0, 0, 0);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	if (mapped_src_nents == 1 &&
-	    iv_dma + ivsize == sg_dma_address(req->src)) {
-		in_contig = true;
-		qm_sg_ents = 0;
-	} else {
-		in_contig = false;
-		qm_sg_ents = 1 + mapped_src_nents;
-	}
+	qm_sg_ents = 1 + mapped_src_nents;
 	dst_sg_idx = qm_sg_ents;
 
 	qm_sg_ents += mapped_dst_nents > 1 ? mapped_dst_nents : 0;
-	if (unlikely(qm_sg_ents > CAAM_QI_MAX_ABLKCIPHER_SG)) {
-		dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
-			qm_sg_ents, CAAM_QI_MAX_ABLKCIPHER_SG);
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
-			   iv_dma, ivsize, op_type, 0, 0);
+	qm_sg_bytes = qm_sg_ents * sizeof(struct qm_sg_entry);
+	if (unlikely(offsetof(struct ablkcipher_edesc, sgt) + qm_sg_bytes +
+		     ivsize > CAAM_QI_MEMCACHE_SIZE)) {
+		dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n",
+			qm_sg_ents, ivsize);
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
 		return ERR_PTR(-ENOMEM);
 	}
 
-	/* allocate space for base edesc and link tables */
+	/* allocate space for base edesc, link tables and IV */
 	edesc = qi_cache_alloc(GFP_DMA | flags);
 	if (unlikely(!edesc)) {
 		dev_err(qidev, "could not allocate extended descriptor\n");
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
-			   iv_dma, ivsize, op_type, 0, 0);
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	/* Make sure IV is located in a DMAable area */
+	sg_table = &edesc->sgt[0];
+	iv = (u8 *)(sg_table + qm_sg_ents);
+	memcpy(iv, req->info, ivsize);
+
+	iv_dma = dma_map_single(qidev, iv, ivsize, DMA_TO_DEVICE);
+	if (dma_mapping_error(qidev, iv_dma)) {
+		dev_err(qidev, "unable to map IV\n");
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
+		qi_cache_free(edesc);
 		return ERR_PTR(-ENOMEM);
 	}
 
 	edesc->src_nents = src_nents;
 	edesc->dst_nents = dst_nents;
 	edesc->iv_dma = iv_dma;
-	sg_table = &edesc->sgt[0];
-	edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
+	edesc->qm_sg_bytes = qm_sg_bytes;
 	edesc->drv_req.app_ctx = req;
 	edesc->drv_req.cbk = ablkcipher_done;
 	edesc->drv_req.drv_ctx = drv_ctx;
 
-	if (!in_contig) {
-		dma_to_qm_sg_one(sg_table, iv_dma, ivsize, 0);
-		sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + 1, 0);
-	}
+	dma_to_qm_sg_one(sg_table, iv_dma, ivsize, 0);
+	sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + 1, 0);
 
 	if (mapped_dst_nents > 1)
 		sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table +
@@ -1302,20 +1314,12 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
 
 	fd_sgt = &edesc->drv_req.fd_sgt[0];
 
-	if (!in_contig)
-		dma_to_qm_sg_one_last_ext(&fd_sgt[1], edesc->qm_sg_dma,
-					  ivsize + req->nbytes, 0);
-	else
-		dma_to_qm_sg_one_last(&fd_sgt[1], iv_dma, ivsize + req->nbytes,
-				      0);
+	dma_to_qm_sg_one_last_ext(&fd_sgt[1], edesc->qm_sg_dma,
+				  ivsize + req->nbytes, 0);
 
 	if (req->src == req->dst) {
-		if (!in_contig)
-			dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma +
-					     sizeof(*sg_table), req->nbytes, 0);
-		else
-			dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->src),
-					 req->nbytes, 0);
+		dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma +
+				     sizeof(*sg_table), req->nbytes, 0);
 	} else if (mapped_dst_nents > 1) {
 		dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
 				     sizeof(*sg_table), req->nbytes, 0);
@@ -1339,10 +1343,10 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
 	int src_nents, mapped_src_nents, dst_nents, mapped_dst_nents;
 	struct ablkcipher_edesc *edesc;
 	dma_addr_t iv_dma;
-	bool out_contig;
+	u8 *iv;
 	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
 	struct qm_sg_entry *sg_table, *fd_sgt;
-	int dst_sg_idx, qm_sg_ents;
+	int dst_sg_idx, qm_sg_ents, qm_sg_bytes;
 	struct caam_drv_ctx *drv_ctx;
 
 	drv_ctx = get_drv_ctx(ctx, GIVENCRYPT);
@@ -1390,46 +1394,45 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
 		mapped_dst_nents = src_nents;
 	}
 
-	iv_dma = dma_map_single(qidev, creq->giv, ivsize, DMA_FROM_DEVICE);
-	if (dma_mapping_error(qidev, iv_dma)) {
-		dev_err(qidev, "unable to map IV\n");
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
-			   0, 0, 0, 0);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	qm_sg_ents = mapped_src_nents > 1 ? mapped_src_nents : 0;
 	dst_sg_idx = qm_sg_ents;
-	if (mapped_dst_nents == 1 &&
-	    iv_dma + ivsize == sg_dma_address(req->dst)) {
-		out_contig = true;
-	} else {
-		out_contig = false;
-		qm_sg_ents += 1 + mapped_dst_nents;
-	}
-
-	if (unlikely(qm_sg_ents > CAAM_QI_MAX_ABLKCIPHER_SG)) {
-		dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
-			qm_sg_ents, CAAM_QI_MAX_ABLKCIPHER_SG);
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
-			   iv_dma, ivsize, GIVENCRYPT, 0, 0);
+	qm_sg_ents += 1 + mapped_dst_nents;
+	qm_sg_bytes = qm_sg_ents * sizeof(struct qm_sg_entry);
+	if (unlikely(offsetof(struct ablkcipher_edesc, sgt) + qm_sg_bytes +
+		     ivsize > CAAM_QI_MEMCACHE_SIZE)) {
+		dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n",
+			qm_sg_ents, ivsize);
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
 		return ERR_PTR(-ENOMEM);
 	}
 
-	/* allocate space for base edesc and link tables */
+	/* allocate space for base edesc, link tables and IV */
 	edesc = qi_cache_alloc(GFP_DMA | flags);
 	if (!edesc) {
 		dev_err(qidev, "could not allocate extended descriptor\n");
-		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
-			   iv_dma, ivsize, GIVENCRYPT, 0, 0);
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	/* Make sure IV is located in a DMAable area */
+	sg_table = &edesc->sgt[0];
+	iv = (u8 *)(sg_table + qm_sg_ents);
+	iv_dma = dma_map_single(qidev, iv, ivsize, DMA_FROM_DEVICE);
+	if (dma_mapping_error(qidev, iv_dma)) {
+		dev_err(qidev, "unable to map IV\n");
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
+		qi_cache_free(edesc);
 		return ERR_PTR(-ENOMEM);
 	}
 
 	edesc->src_nents = src_nents;
 	edesc->dst_nents = dst_nents;
 	edesc->iv_dma = iv_dma;
-	sg_table = &edesc->sgt[0];
-	edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
+	edesc->qm_sg_bytes = qm_sg_bytes;
 	edesc->drv_req.app_ctx = req;
 	edesc->drv_req.cbk = ablkcipher_done;
 	edesc->drv_req.drv_ctx = drv_ctx;
@@ -1437,11 +1440,9 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
 	if (mapped_src_nents > 1)
 		sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table, 0);
 
-	if (!out_contig) {
-		dma_to_qm_sg_one(sg_table + dst_sg_idx, iv_dma, ivsize, 0);
-		sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table +
-				 dst_sg_idx + 1, 0);
-	}
+	dma_to_qm_sg_one(sg_table + dst_sg_idx, iv_dma, ivsize, 0);
+	sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table + dst_sg_idx + 1,
+			 0);
 
 	edesc->qm_sg_dma = dma_map_single(qidev, sg_table, edesc->qm_sg_bytes,
 					  DMA_TO_DEVICE);
@@ -1462,13 +1463,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
 	dma_to_qm_sg_one(&fd_sgt[1], sg_dma_address(req->src), req->nbytes, 0);
 
-	if (!out_contig)
-		dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
-				     sizeof(*sg_table), ivsize + req->nbytes,
-				     0);
-	else
-		dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst),
-				 ivsize + req->nbytes, 0);
+	dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
+			     sizeof(*sg_table), ivsize + req->nbytes, 0);
 
 	return edesc;
 }
@@ -1478,6 +1474,7 @@ static inline int ablkcipher_crypt(struct ablkcipher_request *req, bool encrypt)
 	struct ablkcipher_edesc *edesc;
 	struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
+	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
 	int ret;
 
 	if (unlikely(caam_congested))
@@ -1488,6 +1485,14 @@ static inline int ablkcipher_crypt(struct ablkcipher_request *req, bool encrypt)
 	if (IS_ERR(edesc))
 		return PTR_ERR(edesc);
 
+	/*
+	 * The crypto API expects us to set the IV (req->info) to the last
+	 * ciphertext block.
+	 */
+	if (!encrypt)
+		scatterwalk_map_and_copy(req->info, req->src, req->nbytes -
+					 ivsize, ivsize, 0);
+
 	ret = caam_qi_enqueue(ctx->qidev, &edesc->drv_req);
 	if (!ret) {
 		ret = -EINPROGRESS;