From patchwork Fri Dec 2 09:20:47 2022
From: "Herbert Xu" <herbert@gondor.apana.org.au>
Date: Fri, 02 Dec 2022 17:20:47 +0800
Subject: [PATCH 1/10] crypto: cavium - Set DMA alignment explicitly
To: Catalin Marinas, Ard Biesheuvel, Will Deacon, Marc Zyngier,
    Arnd Bergmann, Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
    "David S. Miller", Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA.  This may no longer be the case with
upcoming arm64 changes.

This patch changes it to explicitly request DMA alignment from
the Crypto API.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 drivers/crypto/cavium/cpt/cptvf_algs.c     | 10 +++++-----
 drivers/crypto/cavium/nitrox/nitrox_aead.c | 12 ++++++------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/cavium/cpt/cptvf_algs.c b/drivers/crypto/cavium/cpt/cptvf_algs.c
index ce3b91c612f0..9eca0c302186 100644
--- a/drivers/crypto/cavium/cpt/cptvf_algs.c
+++ b/drivers/crypto/cavium/cpt/cptvf_algs.c
@@ -97,7 +97,7 @@ static inline u32 create_ctx_hdr(struct skcipher_request *req, u32 enc,
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cvm_enc_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct cvm_req_ctx *rctx = skcipher_request_ctx(req);
+	struct cvm_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	struct fc_context *fctx = &rctx->fctx;
 	u32 enc_iv_len = crypto_skcipher_ivsize(tfm);
 	struct cpt_request_info *req_info = &rctx->cpt_req;
@@ -151,7 +151,7 @@ static inline u32 create_ctx_hdr(struct skcipher_request *req, u32 enc,
 static inline u32 create_input_list(struct skcipher_request *req, u32 enc,
 				    u32 enc_iv_len)
 {
-	struct cvm_req_ctx *rctx = skcipher_request_ctx(req);
+	struct cvm_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	struct cpt_request_info *req_info = &rctx->cpt_req;
 	u32 argcnt = 0;
 
@@ -173,7 +173,7 @@ static inline void store_cb_info(struct skcipher_request *req,
 static inline void create_output_list(struct skcipher_request *req,
 				      u32 enc_iv_len)
 {
-	struct cvm_req_ctx *rctx = skcipher_request_ctx(req);
+	struct cvm_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	struct cpt_request_info *req_info = &rctx->cpt_req;
 	u32 argcnt = 0;
 
@@ -193,7 +193,7 @@ static inline void create_output_list(struct skcipher_request *req,
 static inline int cvm_enc_dec(struct skcipher_request *req, u32 enc)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cvm_req_ctx *rctx = skcipher_request_ctx(req);
+	struct cvm_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	u32 enc_iv_len = crypto_skcipher_ivsize(tfm);
 	struct fc_context *fctx = &rctx->fctx;
 	struct cpt_request_info *req_info = &rctx->cpt_req;
@@ -335,7 +335,7 @@ static int cvm_ecb_des3_setkey(struct crypto_skcipher *cipher, const u8 *key,
 
 static int cvm_enc_dec_init(struct crypto_skcipher *tfm)
 {
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct cvm_req_ctx));
+	crypto_skcipher_set_reqsize_dma(tfm, sizeof(struct cvm_req_ctx));
 	return 0;
 }
 
diff --git a/drivers/crypto/cavium/nitrox/nitrox_aead.c b/drivers/crypto/cavium/nitrox/nitrox_aead.c
index c93c4e41d267..0653484df23f 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_aead.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_aead.c
@@ -392,7 +392,7 @@ static int nitrox_rfc4106_setauthsize(struct crypto_aead *aead,
 
 static int nitrox_rfc4106_set_aead_rctx_sglist(struct aead_request *areq)
 {
-	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx(areq);
+	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx_dma(areq);
 	struct nitrox_aead_rctx *aead_rctx = &rctx->base;
 	unsigned int assoclen = areq->assoclen - GCM_RFC4106_IV_SIZE;
 	struct scatterlist *sg;
@@ -424,7 +424,7 @@ static int nitrox_rfc4106_set_aead_rctx_sglist(struct aead_request *areq)
 static void nitrox_rfc4106_callback(void *arg, int err)
 {
 	struct aead_request *areq = arg;
-	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx(areq);
+	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx_dma(areq);
 	struct nitrox_kcrypt_request *nkreq = &rctx->base.nkreq;
 
 	free_src_sglist(nkreq);
@@ -441,7 +441,7 @@ static int nitrox_rfc4106_enc(struct aead_request *areq)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(areq);
 	struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead);
-	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx(areq);
+	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx_dma(areq);
 	struct nitrox_aead_rctx *aead_rctx = &rctx->base;
 	struct se_crypto_request *creq = &aead_rctx->nkreq.creq;
 	int ret;
@@ -472,7 +472,7 @@ static int nitrox_rfc4106_enc(struct aead_request *areq)
 static int nitrox_rfc4106_dec(struct aead_request *areq)
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(areq);
-	struct nitrox_crypto_ctx *nctx = crypto_aead_ctx(aead);
+	struct nitrox_crypto_ctx *nctx = crypto_aead_ctx_dma(aead);
 	struct nitrox_rfc4106_rctx *rctx = aead_request_ctx(areq);
 	struct nitrox_aead_rctx *aead_rctx = &rctx->base;
 	struct se_crypto_request *creq = &aead_rctx->nkreq.creq;
@@ -510,8 +510,8 @@ static int nitrox_rfc4106_init(struct crypto_aead *aead)
 	if (ret)
 		return ret;
 
-	crypto_aead_set_reqsize(aead, sizeof(struct aead_request) +
-				sizeof(struct nitrox_rfc4106_rctx));
+	crypto_aead_set_reqsize_dma(aead, sizeof(struct aead_request) +
+				    sizeof(struct nitrox_rfc4106_rctx));
 
 	return 0;
 }
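[Illustrative sketch, not from the patch: the conversion above is easier to
follow with a model of what the _dma accessors do. Names prefixed demo_ are
hypothetical, and DEMO_DMA_ALIGN merely stands in for the platform's DMA
alignment (ARCH_DMA_MINALIGN is 128 bytes on arm64). The plain accessor hands
out the context immediately after the request structure; the _dma variant
rounds that address up to the DMA alignment, which is why each accessor
change is paired with a *_set_reqsize_dma() call that reserves padding.]

#include <stdint.h>

/* Stand-in for the platform DMA alignment (128 on arm64). */
#define DEMO_DMA_ALIGN 128UL

/* Model of a crypto request: the driver's context trails the base struct. */
struct demo_request {
	char base[64];	/* stands in for struct skcipher_request */
	char __ctx[];	/* driver request context follows */
};

/* Plain accessor: context starts right after the base structure, so its
 * alignment is whatever the allocator happened to provide. */
static void *demo_request_ctx(struct demo_request *req)
{
	return req->__ctx;
}

/* DMA-aware accessor: round the context address up to DEMO_DMA_ALIGN,
 * mirroring the idea behind skcipher_request_ctx_dma() and friends. */
static void *demo_request_ctx_dma(struct demo_request *req)
{
	uintptr_t addr = (uintptr_t)req->__ctx;

	return (void *)((addr + DEMO_DMA_ALIGN - 1) & ~(DEMO_DMA_ALIGN - 1));
}

[Because the aligned pointer may sit up to DEMO_DMA_ALIGN - 1 bytes past the
start of the raw context, the request size reported to the API has to grow by
that worst case; that pairing is the entire content of these conversions.]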
From patchwork Fri Dec 2 09:20:49 2022
From: "Herbert Xu" <herbert@gondor.apana.org.au>
Date: Fri, 02 Dec 2022 17:20:49 +0800
Subject: [PATCH 2/10] crypto: ccp - Set DMA alignment explicitly
To: Catalin Marinas, Ard Biesheuvel, Will Deacon, Marc Zyngier,
    Arnd Bergmann, Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
    "David S. Miller", Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA.  This may no longer be the case with
upcoming arm64 changes.

This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 drivers/crypto/ccp/ccp-crypto-aes-cmac.c   | 21 +++++++++++----------
 drivers/crypto/ccp/ccp-crypto-aes-galois.c | 12 ++++++------
 drivers/crypto/ccp/ccp-crypto-aes-xts.c    | 20 +++++++++++---------
 drivers/crypto/ccp/ccp-crypto-aes.c        | 29 +++++++++++++++--------------
 drivers/crypto/ccp/ccp-crypto-des3.c       | 17 +++++++++--------
 drivers/crypto/ccp/ccp-crypto-main.c       |  4 ++--
 drivers/crypto/ccp/ccp-crypto-rsa.c        | 18 +++++++++---------
 drivers/crypto/ccp/ccp-crypto-sha.c        | 26 +++++++++++++-------------
 8 files changed, 76 insertions(+), 71 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
index 11a305fa19e6..d8426bdf3190 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
@@ -25,7 +25,7 @@ static int ccp_aes_cmac_complete(struct crypto_async_request *async_req,
 {
 	struct ahash_request *req = ahash_request_cast(async_req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx_dma(req);
 	unsigned int digest_size = crypto_ahash_digestsize(tfm);
 
 	if (ret)
@@ -56,8 +56,8 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 			      unsigned int final)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_ahash_ctx_dma(tfm);
+	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct scatterlist *sg, *cmac_key_sg = NULL;
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
@@ -182,7 +182,7 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 static int ccp_aes_cmac_init(struct ahash_request *req)
 {
-	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx_dma(req);
 
 	memset(rctx, 0, sizeof(*rctx));
 
@@ -219,7 +219,7 @@ static int ccp_aes_cmac_digest(struct ahash_request *req)
 static int ccp_aes_cmac_export(struct ahash_request *req, void *out)
 {
-	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct ccp_aes_cmac_exp_ctx state;
 
 	/* Don't let anything leak to 'out' */
@@ -238,7 +238,7 @@ static int ccp_aes_cmac_export(struct ahash_request *req, void *out)
 static int ccp_aes_cmac_import(struct ahash_request *req, const void *in)
 {
-	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct ccp_aes_cmac_exp_ctx state;
 
 	/* 'in' may not be aligned so memcpy to local variable */
@@ -256,7 +256,7 @@ static int ccp_aes_cmac_import(struct ahash_request *req, const void *in)
 static int ccp_aes_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 			       unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
+	struct ccp_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct ccp_crypto_ahash_alg *alg =
 		ccp_crypto_ahash_alg(crypto_ahash_tfm(tfm));
 	u64 k0_hi, k0_lo, k1_hi, k1_lo, k2_hi, k2_lo;
@@ -334,13 +334,14 @@ static int ccp_aes_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 static int ccp_aes_cmac_cra_init(struct crypto_tfm *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_tfm_ctx_dma(tfm);
 	struct crypto_ahash *ahash = __crypto_ahash_cast(tfm);
 
 	ctx->complete = ccp_aes_cmac_complete;
 	ctx->u.aes.key_len = 0;
 
-	crypto_ahash_set_reqsize(ahash, sizeof(struct ccp_aes_cmac_req_ctx));
+	crypto_ahash_set_reqsize_dma(ahash,
+				     sizeof(struct ccp_aes_cmac_req_ctx));
 
 	return 0;
 }
@@ -382,7 +383,7 @@ int ccp_register_aes_cmac_algs(struct list_head *head)
 			  CRYPTO_ALG_KERN_DRIVER_ONLY |
 			  CRYPTO_ALG_NEED_FALLBACK;
 	base->cra_blocksize = AES_BLOCK_SIZE;
-	base->cra_ctxsize = sizeof(struct ccp_ctx);
+	base->cra_ctxsize = sizeof(struct ccp_ctx) + crypto_dma_padding();
 	base->cra_priority = CCP_CRA_PRIORITY;
 	base->cra_init = ccp_aes_cmac_cra_init;
 	base->cra_module = THIS_MODULE;
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
index 1c1c939f5c39..b1dbb8cea559 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-galois.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
@@ -29,7 +29,7 @@ static int ccp_aes_gcm_complete(struct crypto_async_request *async_req, int ret)
 static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
 			      unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
 	switch (key_len) {
 	case AES_KEYSIZE_128:
@@ -76,8 +76,8 @@ static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
 static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
-	struct ccp_aes_req_ctx *rctx = aead_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_aead_ctx_dma(tfm);
+	struct ccp_aes_req_ctx *rctx = aead_request_ctx_dma(req);
 	struct scatterlist *iv_sg = NULL;
 	unsigned int iv_len = 0;
 	int i;
@@ -148,12 +148,12 @@ static int ccp_aes_gcm_decrypt(struct aead_request *req)
 static int ccp_aes_gcm_cra_init(struct crypto_aead *tfm)
 {
-	struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
 	ctx->complete = ccp_aes_gcm_complete;
 	ctx->u.aes.key_len = 0;
 
-	crypto_aead_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
+	crypto_aead_set_reqsize_dma(tfm, sizeof(struct ccp_aes_req_ctx));
 
 	return 0;
 }
@@ -176,7 +176,7 @@ static struct aead_alg ccp_aes_gcm_defaults = {
 			   CRYPTO_ALG_KERN_DRIVER_ONLY |
 			   CRYPTO_ALG_NEED_FALLBACK,
 	.cra_blocksize	 = AES_BLOCK_SIZE,
-	.cra_ctxsize	 = sizeof(struct ccp_ctx),
+	.cra_ctxsize	 = sizeof(struct ccp_ctx) + CRYPTO_DMA_PADDING,
 	.cra_priority	 = CCP_CRA_PRIORITY,
 	.cra_exit	 = ccp_aes_gcm_cra_exit,
 	.cra_module	 = THIS_MODULE,
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-xts.c b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
index 6849261ca47d..93f735d6b02b 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-xts.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-xts.c
@@ -62,7 +62,7 @@ static struct ccp_unit_size_map xts_unit_sizes[] = {
 static int ccp_aes_xts_complete(struct crypto_async_request *async_req, int ret)
 {
 	struct skcipher_request *req = skcipher_request_cast(async_req);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx_dma(req);
 
 	if (ret)
 		return ret;
@@ -75,7 +75,7 @@ static int ccp_aes_xts_complete(struct crypto_async_request *async_req, int ret)
 static int ccp_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			      unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 	unsigned int ccpversion = ccp_version();
 	int ret;
 
@@ -105,8 +105,8 @@ static int ccp_aes_xts_crypt(struct skcipher_request *req,
 			     unsigned int encrypt)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	unsigned int ccpversion = ccp_version();
 	unsigned int fallback = 0;
 	unsigned int unit;
@@ -196,7 +196,7 @@ static int ccp_aes_xts_decrypt(struct skcipher_request *req)
 static int ccp_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 	struct crypto_skcipher *fallback_tfm;
 
 	ctx->complete = ccp_aes_xts_complete;
@@ -210,15 +210,16 @@ static int ccp_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 	}
 	ctx->u.aes.tfm_skcipher = fallback_tfm;
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx) +
-				    crypto_skcipher_reqsize(fallback_tfm));
+	crypto_skcipher_set_reqsize_dma(tfm,
+					sizeof(struct ccp_aes_req_ctx) +
+					crypto_skcipher_reqsize(fallback_tfm));
 
 	return 0;
 }
 
 static void ccp_aes_xts_exit_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 
 	crypto_free_skcipher(ctx->u.aes.tfm_skcipher);
 }
@@ -246,7 +247,8 @@ static int ccp_register_aes_xts_alg(struct list_head *head,
 			      CRYPTO_ALG_KERN_DRIVER_ONLY |
 			      CRYPTO_ALG_NEED_FALLBACK;
 	alg->base.cra_blocksize = AES_BLOCK_SIZE;
-	alg->base.cra_ctxsize = sizeof(struct ccp_ctx);
+	alg->base.cra_ctxsize = sizeof(struct ccp_ctx) +
+				crypto_dma_padding();
 	alg->base.cra_priority = CCP_CRA_PRIORITY;
 	alg->base.cra_module = THIS_MODULE;
diff --git a/drivers/crypto/ccp/ccp-crypto-aes.c b/drivers/crypto/ccp/ccp-crypto-aes.c
index bed331953ff9..918e223f21b6 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes.c
@@ -22,8 +22,9 @@ static int ccp_aes_complete(struct crypto_async_request *async_req, int ret)
 {
 	struct skcipher_request *req = skcipher_request_cast(async_req);
-	struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(
+		crypto_skcipher_reqtfm(req));
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx_dma(req);
 
 	if (ret)
 		return ret;
@@ -38,7 +39,7 @@ static int ccp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			  unsigned int key_len)
 {
 	struct ccp_crypto_skcipher_alg *alg = ccp_crypto_skcipher_alg(tfm);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 
 	switch (key_len) {
 	case AES_KEYSIZE_128:
@@ -65,8 +66,8 @@ static int ccp_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
 static int ccp_aes_crypt(struct skcipher_request *req, bool encrypt)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	struct scatterlist *iv_sg = NULL;
 	unsigned int iv_len = 0;
 
@@ -118,7 +119,7 @@ static int ccp_aes_decrypt(struct skcipher_request *req)
 static int ccp_aes_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 
 	ctx->complete = ccp_aes_complete;
 	ctx->u.aes.key_len = 0;
@@ -132,7 +133,7 @@ static int ccp_aes_rfc3686_complete(struct crypto_async_request *async_req,
 				    int ret)
 {
 	struct skcipher_request *req = skcipher_request_cast(async_req);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx_dma(req);
 
 	/* Restore the original pointer */
 	req->iv = rctx->rfc3686_info;
@@ -143,7 +144,7 @@ static int ccp_aes_rfc3686_complete(struct crypto_async_request *async_req,
 static int ccp_aes_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key,
 				  unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 
 	if (key_len < CTR_RFC3686_NONCE_SIZE)
 		return -EINVAL;
@@ -157,8 +158,8 @@ static int ccp_aes_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key,
 static int ccp_aes_rfc3686_crypt(struct skcipher_request *req, bool encrypt)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
+	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	u8 *iv;
 
 	/* Initialize the CTR block */
@@ -190,12 +191,12 @@ static int ccp_aes_rfc3686_decrypt(struct skcipher_request *req)
 static int ccp_aes_rfc3686_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 
 	ctx->complete = ccp_aes_rfc3686_complete;
 	ctx->u.aes.key_len = 0;
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
+	crypto_skcipher_set_reqsize_dma(tfm, sizeof(struct ccp_aes_req_ctx));
 
 	return 0;
 }
@@ -213,7 +214,7 @@ static const struct skcipher_alg ccp_aes_defaults = {
 				  CRYPTO_ALG_KERN_DRIVER_ONLY |
 				  CRYPTO_ALG_NEED_FALLBACK,
 	.base.cra_blocksize	= AES_BLOCK_SIZE,
-	.base.cra_ctxsize	= sizeof(struct ccp_ctx),
+	.base.cra_ctxsize	= sizeof(struct ccp_ctx) + CRYPTO_DMA_PADDING,
 	.base.cra_priority	= CCP_CRA_PRIORITY,
 	.base.cra_module	= THIS_MODULE,
 };
@@ -231,7 +232,7 @@ static const struct skcipher_alg ccp_aes_rfc3686_defaults = {
 				  CRYPTO_ALG_KERN_DRIVER_ONLY |
 				  CRYPTO_ALG_NEED_FALLBACK,
 	.base.cra_blocksize	= CTR_RFC3686_BLOCK_SIZE,
-	.base.cra_ctxsize	= sizeof(struct ccp_ctx),
+	.base.cra_ctxsize	= sizeof(struct ccp_ctx) + CRYPTO_DMA_PADDING,
 	.base.cra_priority	= CCP_CRA_PRIORITY,
 	.base.cra_module	= THIS_MODULE,
 };
diff --git a/drivers/crypto/ccp/ccp-crypto-des3.c b/drivers/crypto/ccp/ccp-crypto-des3.c
index 278636ed251a..afae30adb703 100644
--- a/drivers/crypto/ccp/ccp-crypto-des3.c
+++ b/drivers/crypto/ccp/ccp-crypto-des3.c
@@ -21,8 +21,9 @@ static int ccp_des3_complete(struct crypto_async_request *async_req, int ret)
 {
 	struct skcipher_request *req = skcipher_request_cast(async_req);
-	struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
-	struct ccp_des3_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(
+		crypto_skcipher_reqtfm(req));
+	struct ccp_des3_req_ctx *rctx = skcipher_request_ctx_dma(req);
 
 	if (ret)
 		return ret;
@@ -37,7 +38,7 @@ static int ccp_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			   unsigned int key_len)
 {
 	struct ccp_crypto_skcipher_alg *alg = ccp_crypto_skcipher_alg(tfm);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 	int err;
 
 	err = verify_skcipher_des3_key(tfm, key);
@@ -60,8 +61,8 @@ static int ccp_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 static int ccp_des3_crypt(struct skcipher_request *req, bool encrypt)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct ccp_des3_req_ctx *rctx = skcipher_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
+	struct ccp_des3_req_ctx *rctx = skcipher_request_ctx_dma(req);
 	struct scatterlist *iv_sg = NULL;
 	unsigned int iv_len = 0;
 
@@ -114,12 +115,12 @@ static int ccp_des3_decrypt(struct skcipher_request *req)
 static int ccp_des3_init_tfm(struct crypto_skcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_skcipher_ctx_dma(tfm);
 
 	ctx->complete = ccp_des3_complete;
 	ctx->u.des3.key_len = 0;
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_des3_req_ctx));
+	crypto_skcipher_set_reqsize_dma(tfm, sizeof(struct ccp_des3_req_ctx));
 
 	return 0;
 }
@@ -137,7 +138,7 @@ static const struct skcipher_alg ccp_des3_defaults = {
 				  CRYPTO_ALG_KERN_DRIVER_ONLY |
 				  CRYPTO_ALG_NEED_FALLBACK,
 	.base.cra_blocksize	= DES3_EDE_BLOCK_SIZE,
-	.base.cra_ctxsize	= sizeof(struct ccp_ctx),
+	.base.cra_ctxsize	= sizeof(struct ccp_ctx) + CRYPTO_DMA_PADDING,
 	.base.cra_priority	= CCP_CRA_PRIORITY,
 	.base.cra_module	= THIS_MODULE,
 };
diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index dd86d2650bea..73442a382f68 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -139,7 +139,7 @@ static void ccp_crypto_complete(void *data, int err)
 	struct ccp_crypto_cmd *crypto_cmd = data;
 	struct ccp_crypto_cmd *held, *next, *backlog;
 	struct crypto_async_request *req = crypto_cmd->req;
-	struct ccp_ctx *ctx = crypto_tfm_ctx(req->tfm);
+	struct ccp_ctx *ctx = crypto_tfm_ctx_dma(req->tfm);
 	int ret;
 
 	if (err == -EINPROGRESS) {
@@ -183,7 +183,7 @@ static void ccp_crypto_complete(void *data, int err)
 			break;
 
 		/* Error occurred, report it and get the next entry */
-		ctx = crypto_tfm_ctx(held->req->tfm);
+		ctx = crypto_tfm_ctx_dma(held->req->tfm);
 		if (ctx->complete)
 			ret = ctx->complete(held->req, ret);
 		held->req->complete(held->req, ret);
diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
index 1223ac70aea2..a14f85512cf4 100644
--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -44,7 +44,7 @@ static inline int ccp_copy_and_save_keypart(u8 **kpbuf, unsigned int *kplen,
 static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
 {
 	struct akcipher_request *req = akcipher_request_cast(async_req);
-	struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+	struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx_dma(req);
 
 	if (ret)
 		return ret;
@@ -56,7 +56,7 @@ static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
 static unsigned int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
 {
-	struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = akcipher_tfm_ctx_dma(tfm);
 
 	return ctx->u.rsa.n_len;
 }
@@ -64,8 +64,8 @@ static unsigned int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
 static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
-	struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
-	struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+	struct ccp_ctx *ctx = akcipher_tfm_ctx_dma(tfm);
+	struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx_dma(req);
 	int ret = 0;
 
 	memset(&rctx->cmd, 0, sizeof(rctx->cmd));
@@ -126,7 +126,7 @@ static void ccp_rsa_free_key_bufs(struct ccp_ctx *ctx)
 static int ccp_rsa_setkey(struct crypto_akcipher *tfm, const void *key,
 			  unsigned int keylen, bool private)
 {
-	struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = akcipher_tfm_ctx_dma(tfm);
 	struct rsa_key raw_key;
 	int ret;
 
@@ -192,9 +192,9 @@ static int ccp_rsa_setpubkey(struct crypto_akcipher *tfm, const void *key,
 static int ccp_rsa_init_tfm(struct crypto_akcipher *tfm)
 {
-	struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = akcipher_tfm_ctx_dma(tfm);
 
-	akcipher_set_reqsize(tfm, sizeof(struct ccp_rsa_req_ctx));
+	akcipher_set_reqsize_dma(tfm, sizeof(struct ccp_rsa_req_ctx));
 	ctx->complete = ccp_rsa_complete;
 
 	return 0;
@@ -202,7 +202,7 @@ static int ccp_rsa_init_tfm(struct crypto_akcipher *tfm)
 static void ccp_rsa_exit_tfm(struct crypto_akcipher *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(&tfm->base);
+	struct ccp_ctx *ctx = akcipher_tfm_ctx_dma(tfm);
 
 	ccp_rsa_free_key_bufs(ctx);
 }
@@ -220,7 +220,7 @@ static struct akcipher_alg ccp_rsa_defaults = {
 		.cra_driver_name = "rsa-ccp",
 		.cra_priority = CCP_CRA_PRIORITY,
 		.cra_module = THIS_MODULE,
-		.cra_ctxsize = 2 * sizeof(struct ccp_ctx),
+		.cra_ctxsize = 2 * sizeof(struct ccp_ctx) + CRYPTO_DMA_PADDING,
 	},
 };
diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index 74fa5360e722..fa3ae8e78f6f 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -28,7 +28,7 @@ static int ccp_sha_complete(struct crypto_async_request *async_req, int ret)
 {
 	struct ahash_request *req = ahash_request_cast(async_req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_sha_req_ctx *rctx = ahash_request_ctx_dma(req);
 	unsigned int digest_size = crypto_ahash_digestsize(tfm);
 
 	if (ret)
@@ -59,8 +59,8 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 			     unsigned int final)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_ahash_ctx_dma(tfm);
+	struct ccp_sha_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct scatterlist *sg;
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
@@ -182,8 +182,8 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 static int ccp_sha_init(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_ctx *ctx = crypto_ahash_ctx_dma(tfm);
+	struct ccp_sha_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct ccp_crypto_ahash_alg *alg =
 		ccp_crypto_ahash_alg(crypto_ahash_tfm(tfm));
 	unsigned int block_size =
@@ -231,7 +231,7 @@ static int ccp_sha_digest(struct ahash_request *req)
 static int ccp_sha_export(struct ahash_request *req, void *out)
 {
-	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_sha_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct ccp_sha_exp_ctx state;
 
 	/* Don't let anything leak to 'out' */
@@ -252,7 +252,7 @@ static int ccp_sha_export(struct ahash_request *req, void *out)
 static int ccp_sha_import(struct ahash_request *req, const void *in)
 {
-	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
+	struct ccp_sha_req_ctx *rctx = ahash_request_ctx_dma(req);
 	struct ccp_sha_exp_ctx state;
 
 	/* 'in' may not be aligned so memcpy to local variable */
@@ -272,7 +272,7 @@ static int ccp_sha_import(struct ahash_request *req, const void *in)
 static int ccp_sha_setkey(struct crypto_ahash *tfm, const u8 *key,
 			  unsigned int key_len)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
+	struct ccp_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct crypto_shash *shash = ctx->u.sha.hmac_tfm;
 	unsigned int block_size = crypto_shash_blocksize(shash);
 	unsigned int digest_size = crypto_shash_digestsize(shash);
@@ -313,13 +313,13 @@ static int ccp_sha_setkey(struct crypto_ahash *tfm, const u8 *key,
 static int ccp_sha_cra_init(struct crypto_tfm *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
 	struct crypto_ahash *ahash = __crypto_ahash_cast(tfm);
+	struct ccp_ctx *ctx = crypto_ahash_ctx_dma(ahash);
 
 	ctx->complete = ccp_sha_complete;
 	ctx->u.sha.key_len = 0;
 
-	crypto_ahash_set_reqsize(ahash, sizeof(struct ccp_sha_req_ctx));
+	crypto_ahash_set_reqsize_dma(ahash, sizeof(struct ccp_sha_req_ctx));
 
 	return 0;
 }
@@ -330,7 +330,7 @@ static void ccp_sha_cra_exit(struct crypto_tfm *tfm)
 static int ccp_hmac_sha_cra_init(struct crypto_tfm *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_tfm_ctx_dma(tfm);
 	struct ccp_crypto_ahash_alg *alg = ccp_crypto_ahash_alg(tfm);
 	struct crypto_shash *hmac_tfm;
 
@@ -348,7 +348,7 @@ static int ccp_hmac_sha_cra_init(struct crypto_tfm *tfm)
 static void ccp_hmac_sha_cra_exit(struct crypto_tfm *tfm)
 {
-	struct ccp_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct ccp_ctx *ctx = crypto_tfm_ctx_dma(tfm);
 
 	if (ctx->u.sha.hmac_tfm)
 		crypto_free_shash(ctx->u.sha.hmac_tfm);
@@ -492,7 +492,7 @@ static int ccp_register_sha_alg(struct list_head *head,
 			  CRYPTO_ALG_KERN_DRIVER_ONLY |
 			  CRYPTO_ALG_NEED_FALLBACK;
 	base->cra_blocksize = def->block_size;
-	base->cra_ctxsize = sizeof(struct ccp_ctx);
+	base->cra_ctxsize = sizeof(struct ccp_ctx) + crypto_dma_padding();
 	base->cra_priority = CCP_CRA_PRIORITY;
 	base->cra_init = ccp_sha_cra_init;
 	base->cra_exit = ccp_sha_cra_exit;
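[A note on the ctxsize arithmetic used throughout this patch: cra_ctxsize
grows by CRYPTO_DMA_PADDING (or crypto_dma_padding()) so the ctx_dma
accessors can round the context pointer up without running past the
allocation. The check below is a minimal sketch under the assumption that
the padding must cover a worst case of align - 1 bytes; the DEMO_* names
are hypothetical, and the kernel's actual CRYPTO_DMA_PADDING definition may
differ in detail.]

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_DMA_ALIGN   128UL
/* Worst-case bytes lost to rounding an arbitrary address up. */
#define DEMO_DMA_PADDING (DEMO_DMA_ALIGN - 1)

/* A context of 'size' bytes starts at offset 'off' inside an allocation
 * sized size + DEMO_DMA_PADDING; verify the aligned context still fits. */
static void demo_check(uintptr_t off, size_t size)
{
	uintptr_t aligned = (off + DEMO_DMA_ALIGN - 1) & ~(DEMO_DMA_ALIGN - 1);

	assert(aligned + size <= off + size + DEMO_DMA_PADDING);
}

int main(void)
{
	uintptr_t off;

	/* Exhaustively check several alignment periods' worth of offsets. */
	for (off = 0; off < 4 * DEMO_DMA_ALIGN; off++)
		demo_check(off, sizeof(long) * 25);
	return 0;
}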
From patchwork Fri Dec 2 09:20:51 2022
From: "Herbert Xu" <herbert@gondor.apana.org.au>
Date: Fri, 02 Dec 2022 17:20:51 +0800
Subject: [PATCH 3/10] crypto: ccree - Set DMA alignment explicitly
To: Catalin Marinas, Ard Biesheuvel, Will Deacon, Marc Zyngier,
    Arnd Bergmann, Greg Kroah-Hartman, Andrew Morton, Linus Torvalds,
    Linux Memory Management List, Linux ARM, Linux Kernel Mailing List,
    "David S. Miller", Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA.  This may no longer be the case with
upcoming arm64 changes.

This patch changes it to explicitly request DMA alignment from
the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 drivers/crypto/ccree/cc_aead.c       | 62 ++++++++++++------------
 drivers/crypto/ccree/cc_buffer_mgr.c | 18 +++----
 drivers/crypto/ccree/cc_hash.c       | 86 +++++++++++++++++------------------
 3 files changed, 83 insertions(+), 83 deletions(-)

diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c
index 35794c7271fb..109ffb375fc6 100644
--- a/drivers/crypto/ccree/cc_aead.c
+++ b/drivers/crypto/ccree/cc_aead.c
@@ -138,7 +138,7 @@ static int cc_aead_init(struct crypto_aead *tfm)
 	ctx->flow_mode = cc_alg->flow_mode;
 	ctx->auth_mode = cc_alg->auth_mode;
 	ctx->drvdata = cc_alg->drvdata;
-	crypto_aead_set_reqsize(tfm, sizeof(struct aead_req_ctx));
+	crypto_aead_set_reqsize_dma(tfm, sizeof(struct aead_req_ctx));
 
 	/* Allocate key buffer, cache line aligned */
 	ctx->enckey = dma_alloc_coherent(dev, AES_MAX_KEY_SIZE,
@@ -208,7 +208,7 @@ static int cc_aead_init(struct crypto_aead *tfm)
 static void cc_aead_complete(struct device *dev, void *cc_req, int err)
 {
 	struct aead_request *areq = (struct aead_request *)cc_req;
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(areq);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(cc_req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
@@ -723,7 +723,7 @@ static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(areq);
 	enum cc_req_dma_buf_type assoc_dma_type = areq_ctx->assoc_buff_type;
 	unsigned int idx = *seq_size;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
@@ -762,7 +762,7 @@ static void cc_proc_authen_desc(struct aead_request *areq,
 				struct cc_hw_desc desc[],
 				unsigned int *seq_size, int direct)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(areq);
 	enum cc_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
 	unsigned int idx = *seq_size;
 	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
@@ -827,7 +827,7 @@ static void cc_proc_cipher_desc(struct aead_request *areq,
 				unsigned int *seq_size)
 {
 	unsigned int idx = *seq_size;
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(areq);
 	enum cc_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
 	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
@@ -873,7 +873,7 @@ static void cc_proc_digest_desc(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	unsigned int idx = *seq_size;
 	unsigned int hash_mode = (ctx->auth_mode == DRV_HASH_SHA1) ?
				  DRV_HASH_HW_SHA1 : DRV_HASH_HW_SHA256;
@@ -923,7 +923,7 @@ static void cc_set_cipher_desc(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	unsigned int hw_iv_size = req_ctx->hw_iv_size;
 	unsigned int idx = *seq_size;
 	int direct = req_ctx->gen_ctx.op_type;
@@ -965,7 +965,7 @@ static void cc_set_cipher_desc(struct aead_request *req,
 static void cc_proc_cipher(struct aead_request *req, struct cc_hw_desc desc[],
 			   unsigned int *seq_size, unsigned int data_flow_mode)
 {
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	int direct = req_ctx->gen_ctx.op_type;
 	unsigned int idx = *seq_size;
 
@@ -1082,7 +1082,7 @@ static void cc_proc_header_desc(struct aead_request *req,
 				struct cc_hw_desc desc[],
 				unsigned int *seq_size)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	unsigned int idx = *seq_size;
 
 	/* Hash associated data */
@@ -1158,7 +1158,7 @@ static void cc_proc_scheme_desc(struct aead_request *req,
 static void cc_mlli_to_sram(struct aead_request *req,
 			    struct cc_hw_desc desc[], unsigned int *seq_size)
 {
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
@@ -1212,7 +1212,7 @@ static void cc_hmac_authenc(struct aead_request *req, struct cc_hw_desc desc[],
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	int direct = req_ctx->gen_ctx.op_type;
 	unsigned int data_flow_mode =
 		cc_get_data_flow(direct, ctx->flow_mode,
@@ -1265,7 +1265,7 @@ cc_xcbc_authenc(struct aead_request *req, struct cc_hw_desc desc[],
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	int direct = req_ctx->gen_ctx.op_type;
 	unsigned int data_flow_mode =
 		cc_get_data_flow(direct, ctx->flow_mode,
@@ -1312,7 +1312,7 @@ static int validate_data_size(struct cc_aead_ctx *ctx,
 			      enum drv_crypto_direction direct,
 			      struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	unsigned int assoclen = areq_ctx->assoclen;
 	unsigned int cipherlen = (direct == DRV_CRYPTO_DIRECTION_DECRYPT) ?
@@ -1411,7 +1411,7 @@ static int cc_ccm(struct aead_request *req, struct cc_hw_desc desc[],
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	unsigned int idx = *seq_size;
 	unsigned int cipher_flow_mode;
 	dma_addr_t mac_result;
@@ -1533,7 +1533,7 @@ static int config_ccm_adata(struct aead_request *req)
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	//unsigned int size_of_a = 0, rem_a_size = 0;
 	unsigned int lp = req->iv[0];
 	/* Note: The code assume that req->iv[0] already contains the value
@@ -1591,7 +1591,7 @@ static void cc_proc_rfc4309_ccm(struct aead_request *req)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 
 	/* L' */
 	memset(areq_ctx->ctr_iv, 0, AES_BLOCK_SIZE);
@@ -1615,7 +1615,7 @@ static void cc_set_ghash_desc(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	unsigned int idx = *seq_size;
 
 	/* load key to AES*/
@@ -1693,7 +1693,7 @@ static void cc_set_gctr_desc(struct aead_request *req, struct cc_hw_desc desc[],
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	unsigned int idx = *seq_size;
 
 	/* load key to AES*/
@@ -1730,7 +1730,7 @@ static void cc_proc_gcm_result(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	dma_addr_t mac_result;
 	unsigned int idx = *seq_size;
 
@@ -1792,7 +1792,7 @@ static void cc_proc_gcm_result(struct aead_request *req,
 static int cc_gcm(struct aead_request *req, struct cc_hw_desc desc[],
 		  unsigned int *seq_size)
 {
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	unsigned int cipher_flow_mode;
 
 	//in RFC4543 no data to encrypt. just copy data from src to dest.
@@ -1830,7 +1830,7 @@ static int config_gcm_context(struct aead_request *req)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *req_ctx = aead_request_ctx_dma(req);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	unsigned int cryptlen = (req_ctx->gen_ctx.op_type ==
@@ -1879,7 +1879,7 @@ static void cc_proc_rfc4_gcm(struct aead_request *req)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 
 	memcpy(areq_ctx->ctr_iv + GCM_BLOCK_RFC4_NONCE_OFFSET,
 	       ctx->ctr_nonce, GCM_BLOCK_RFC4_NONCE_SIZE);
@@ -1896,7 +1896,7 @@ static int cc_proc_aead(struct aead_request *req,
 	struct cc_hw_desc desc[MAX_AEAD_PROCESS_SEQ];
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct cc_crypto_req cc_req = {};
 
@@ -2019,7 +2019,7 @@ static int cc_proc_aead(struct aead_request *req,
 static int cc_aead_encrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	memset(areq_ctx, 0, sizeof(*areq_ctx));
@@ -2039,7 +2039,7 @@ static int cc_rfc4309_ccm_encrypt(struct aead_request *req)
 {
 	/* Very similar to cc_aead_encrypt() above. */
 
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	rc = crypto_ipsec_check_assoclen(req->assoclen);
@@ -2063,7 +2063,7 @@ static int cc_rfc4309_ccm_encrypt(struct aead_request *req)
 static int cc_aead_decrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	memset(areq_ctx, 0, sizeof(*areq_ctx));
@@ -2081,7 +2081,7 @@ static int cc_aead_decrypt(struct aead_request *req)
 static int cc_rfc4309_ccm_decrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	rc = crypto_ipsec_check_assoclen(req->assoclen);
@@ -2193,7 +2193,7 @@ static int cc_rfc4543_gcm_setauthsize(struct crypto_aead *authenc,
 static int cc_rfc4106_gcm_encrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	rc = crypto_ipsec_check_assoclen(req->assoclen);
@@ -2217,7 +2217,7 @@ static int cc_rfc4106_gcm_encrypt(struct aead_request *req)
 static int cc_rfc4543_gcm_encrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	rc = crypto_ipsec_check_assoclen(req->assoclen);
@@ -2244,7 +2244,7 @@ static int cc_rfc4543_gcm_encrypt(struct aead_request *req)
 static int cc_rfc4106_gcm_decrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	rc = crypto_ipsec_check_assoclen(req->assoclen);
@@ -2268,7 +2268,7 @@ static int cc_rfc4106_gcm_decrypt(struct aead_request *req)
 static int cc_rfc4543_gcm_decrypt(struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc;
 
 	rc = crypto_ipsec_check_assoclen(req->assoclen);
diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index 9efd88f871d1..bcca55bff910 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -52,7 +52,7 @@ static inline char *cc_dma_buf_type(enum cc_req_dma_buf_type type)
 static void cc_copy_mac(struct device *dev, struct aead_request *req,
 			enum cc_sg_cpy_direct dir)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	u32 skip = req->assoclen + req->cryptlen;
 
 	cc_copy_sg_portion(dev, areq_ctx->backup_mac, req->src,
@@ -456,7 +456,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
 void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
 	int src_direction = (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
@@ -546,7 +546,7 @@ static int cc_aead_chain_iv(struct cc_drvdata *drvdata,
 			    struct buffer_array *sg_data,
 			    bool is_last, bool do_chain)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
 	struct device *dev = drvdata_to_dev(drvdata);
 	gfp_t flags = cc_gfp_flags(&req->base);
@@ -586,7 +586,7 @@ static int cc_aead_chain_assoc(struct cc_drvdata *drvdata,
 			       struct buffer_array *sg_data,
 			       bool is_last, bool do_chain)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	int rc = 0;
 	int mapped_nents = 0;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -652,7 +652,7 @@ static int cc_aead_chain_assoc(struct cc_drvdata *drvdata,
 static void cc_prepare_aead_data_dlli(struct aead_request *req,
 				      u32 *src_last_bytes, u32 *dst_last_bytes)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
 	unsigned int authsize = areq_ctx->req_authsize;
 	struct scatterlist *sg;
@@ -678,7 +678,7 @@ static void cc_prepare_aead_data_mlli(struct cc_drvdata *drvdata,
 				      u32 *src_last_bytes, u32 *dst_last_bytes,
 				      bool is_last_table)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
 	unsigned int authsize = areq_ctx->req_authsize;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -790,7 +790,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 			      struct buffer_array *sg_data,
 			      bool is_last_table, bool do_chain)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	struct device *dev = drvdata_to_dev(drvdata);
 	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
 	unsigned int authsize = areq_ctx->req_authsize;
@@ -895,7 +895,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 static void cc_update_aead_mlli_nents(struct cc_drvdata *drvdata,
 				      struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	u32 curr_mlli_size = 0;
 
 	if (areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI) {
@@ -945,7 +945,7 @@ static void cc_update_aead_mlli_nents(struct cc_drvdata *drvdata,
 int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
 {
-	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+	struct aead_req_ctx *areq_ctx = aead_request_ctx_dma(req);
 	struct mlli_params *mlli_params = &areq_ctx->mlli_params;
 	struct device *dev = drvdata_to_dev(drvdata);
 	struct buffer_array sg_data;
diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
index 683c9a430e11..f418162932fe 100644
--- a/drivers/crypto/ccree/cc_hash.c
+++ b/drivers/crypto/ccree/cc_hash.c
@@ -283,9 +283,9 @@ static void cc_unmap_result(struct device *dev, struct ahash_req_ctx *state,
 static void cc_update_complete(struct device *dev, void *cc_req, int err)
 {
 	struct ahash_request *req = (struct ahash_request *)cc_req;
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 
 	dev_dbg(dev, "req=%pK\n", req);
 
@@ -301,9 +301,9 @@ static void cc_update_complete(struct device *dev, void *cc_req, int err)
 static void cc_digest_complete(struct device *dev, void *cc_req, int err)
 {
 	struct ahash_request *req = (struct ahash_request *)cc_req;
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 
 	dev_dbg(dev, "req=%pK\n", req);
@@ -321,9 +321,9 @@ static void cc_digest_complete(struct device *dev, void *cc_req, int err)
 static void cc_hash_complete(struct device *dev, void *cc_req, int err)
 {
 	struct ahash_request *req = (struct ahash_request *)cc_req;
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 
 	dev_dbg(dev, "req=%pK\n", req);
@@ -341,9 +341,9 @@ static void cc_hash_complete(struct device *dev, void *cc_req, int err)
 static int cc_fin_result(struct cc_hw_desc *desc, struct ahash_request *req,
 			 int idx)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 
 	/* Get final MAC result */
@@ -364,9 +364,9 @@ static int cc_fin_result(struct cc_hw_desc *desc, struct ahash_request *req,
 static int cc_fin_hmac(struct cc_hw_desc *desc, struct ahash_request *req,
 		       int idx)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 
 	/* store the hash digest result in the context */
@@ -417,9 +417,9 @@ static int cc_fin_hmac(struct cc_hw_desc *desc, struct ahash_request *req,
 static int cc_hash_digest(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
@@ -555,9 +555,9 @@ static int cc_restore_hash(struct cc_hw_desc *desc, struct cc_hash_ctx *ctx,
 static int cc_hash_update(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
@@ -631,9 +631,9 @@ static int cc_hash_update(struct ahash_request *req)
 static int cc_do_finup(struct ahash_request *req, bool update)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
@@ -711,9 +711,9 @@ static int cc_hash_final(struct ahash_request *req)
 static int cc_hash_init(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	dev_dbg(dev, "===== init (%d) ====\n", req->nbytes);
@@ -736,7 +736,7 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
 	u32 larval_addr;
 	struct device *dev;
 
-	ctx = crypto_ahash_ctx(ahash);
+	ctx = crypto_ahash_ctx_dma(ahash);
 	dev = drvdata_to_dev(ctx->drvdata);
 	dev_dbg(dev, "start keylen: %d", keylen);
 
@@ -922,7 +922,7 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
 			  const u8 *key, unsigned int keylen)
 {
 	struct cc_crypto_req cc_req = {};
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	int rc = 0;
 	unsigned int idx = 0;
@@ -1007,7 +1007,7 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
 static int cc_cmac_setkey(struct crypto_ahash *ahash,
 			  const u8 *key, unsigned int keylen)
 {
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	dev_dbg(dev, "===== setkey (%d) ====\n", keylen);
@@ -1109,7 +1109,7 @@ static int cc_alloc_ctx(struct cc_hash_ctx *ctx)
 static int cc_get_hash_len(struct crypto_tfm *tfm)
 {
-	struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_tfm_ctx_dma(tfm);
 
 	if (ctx->hash_mode == DRV_HASH_SM3)
 		return CC_SM3_HASH_LEN_SIZE;
@@ -1119,7 +1119,7 @@ static int cc_get_hash_len(struct crypto_tfm *tfm)
 static int cc_cra_init(struct crypto_tfm *tfm)
 {
-	struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_tfm_ctx_dma(tfm);
 	struct hash_alg_common *hash_alg_common =
 		container_of(tfm->__crt_alg, struct hash_alg_common, base);
 	struct ahash_alg *ahash_alg =
@@ -1127,8 +1127,8 @@ static int cc_cra_init(struct crypto_tfm *tfm)
 	struct cc_hash_alg *cc_alg =
 		container_of(ahash_alg, struct cc_hash_alg, ahash_alg);
 
-	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
-				 sizeof(struct ahash_req_ctx));
+	crypto_ahash_set_reqsize_dma(__crypto_ahash_cast(tfm),
+				     sizeof(struct ahash_req_ctx));
 
 	ctx->hash_mode = cc_alg->hash_mode;
 	ctx->hw_mode = cc_alg->hw_mode;
@@ -1140,7 +1140,7 @@ static int cc_cra_init(struct crypto_tfm *tfm)
 static void cc_cra_exit(struct crypto_tfm *tfm)
 {
-	struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_tfm_ctx_dma(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	dev_dbg(dev, "cc_cra_exit");
@@ -1149,9 +1149,9 @@ static void cc_cra_exit(struct crypto_tfm *tfm)
 static int cc_mac_update(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
 	struct cc_crypto_req cc_req = {};
@@ -1217,9 +1217,9 @@ static int cc_mac_update(struct ahash_request *req)
 static int cc_mac_final(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
@@ -1338,9 +1338,9 @@ static int cc_mac_final(struct ahash_request *req)
 static int cc_mac_finup(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
@@ -1419,9 +1419,9 @@ static int cc_mac_finup(struct ahash_request *req)
 static int cc_mac_digest(struct ahash_request *req)
 {
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct cc_crypto_req cc_req = {};
@@ -1499,8 +1499,8 @@ static int cc_mac_digest(struct ahash_request *req)
 static int cc_hash_export(struct ahash_request *req, void *out)
 {
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(ahash);
+	struct ahash_req_ctx *state = ahash_request_ctx_dma(req);
 	u8 *curr_buff = cc_hash_buf(state);
 	u32 curr_buff_cnt = *cc_hash_buf_cnt(state);
 	const u32 tmp = CC_EXPORT_MAGIC;
@@ -1525,9 +1525,9
@@ static int cc_hash_export(struct ahash_request *req, void *out) static int cc_hash_import(struct ahash_request *req, const void *in) { struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); - struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash); + struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(ahash); struct device *dev = drvdata_to_dev(ctx->drvdata); - struct ahash_req_ctx *state = ahash_request_ctx(req); + struct ahash_req_ctx *state = ahash_request_ctx_dma(req); u32 tmp; memcpy(&tmp, in, sizeof(u32)); @@ -1846,7 +1846,7 @@ static struct cc_hash_alg *cc_alloc_hash_alg(struct cc_hash_template *template, template->driver_name); } alg->cra_module = THIS_MODULE; - alg->cra_ctxsize = sizeof(struct cc_hash_ctx); + alg->cra_ctxsize = sizeof(struct cc_hash_ctx) + crypto_dma_padding(); alg->cra_priority = CC_CRA_PRIO; alg->cra_blocksize = template->blocksize; alg->cra_alignmask = 0; @@ -2073,9 +2073,9 @@ static void cc_setup_xcbc(struct ahash_request *areq, struct cc_hw_desc desc[], unsigned int *seq_size) { unsigned int idx = *seq_size; - struct ahash_req_ctx *state = ahash_request_ctx(areq); + struct ahash_req_ctx *state = ahash_request_ctx_dma(areq); struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); - struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm); /* Setup XCBC MAC K1 */ hw_desc_init(&desc[idx]); @@ -2130,9 +2130,9 @@ static void cc_setup_cmac(struct ahash_request *areq, struct cc_hw_desc desc[], unsigned int *seq_size) { unsigned int idx = *seq_size; - struct ahash_req_ctx *state = ahash_request_ctx(areq); + struct ahash_req_ctx *state = ahash_request_ctx_dma(areq); struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); - struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm); + struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm); /* Setup CMAC Key */ hw_desc_init(&desc[idx]);
From patchwork Fri Dec 2 09:20:53 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062452
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:20:53 +0800
Subject: [PATCH 4/10] crypto: chelsio - Set DMA alignment explicitly
To: Catalin Marinas , Ard Biesheuvel , Will Deacon , Marc Zyngier , Arnd Bergmann , Greg Kroah-Hartman , Andrew Morton , Linus Torvalds , Linux Memory Management List , Linux ARM , Linux Kernel Mailing List , "David S. Miller" , Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API.
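The conversion pattern is the same one used throughout this series: the plain context accessors are swapped for their _dma variants, and the request size is registered through the corresponding _dma helper so the API reserves padding up to crypto_dma_align(). A minimal sketch of the pattern follows; example_reqctx and the function names are hypothetical stand-ins (for e.g. chcr_aead_reqctx), not taken from any driver:

#include <crypto/internal/aead.h>

/* Hypothetical request context standing in for a driver's real one. */
struct example_reqctx {
	u8 iv[16];
};

static int example_init(struct crypto_aead *tfm)
{
	/* Reserve the context size plus DMA-alignment padding. */
	crypto_aead_set_reqsize_dma(tfm, sizeof(struct example_reqctx));
	return 0;
}

static int example_encrypt(struct aead_request *req)
{
	/*
	 * Was aead_request_ctx(req), which only guarantees kmalloc
	 * alignment; the _dma variant returns a pointer offset so it
	 * is aligned to crypto_dma_align().
	 */
	struct example_reqctx *rctx = aead_request_ctx_dma(req);

	memset(rctx->iv, 0, sizeof(rctx->iv));
	return 0;
}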
Signed-off-by: Herbert Xu --- drivers/crypto/chelsio/chcr_algo.c | 43 ++++++++++++++++++------------------- 1 file changed, 22 insertions(+), 21 deletions(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 6933546f87b1..2158bc237dbb 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -210,7 +210,7 @@ static inline int chcr_handle_aead_resp(struct aead_request *req, unsigned char *input, int err) { - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct chcr_dev *dev = a_ctx(tfm)->dev; @@ -718,7 +718,7 @@ static inline int get_qidxs(struct crypto_async_request *req, { struct aead_request *aead_req = container_of(req, struct aead_request, base); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(aead_req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(aead_req); *txqidx = reqctx->txqidx; *rxqidx = reqctx->rxqidx; break; @@ -2362,7 +2362,7 @@ static void chcr_hmac_cra_exit(struct crypto_tfm *tfm) inline void chcr_aead_common_exit(struct aead_request *req) { - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct uld_ctx *u_ctx = ULD_CTX(a_ctx(tfm)); @@ -2373,7 +2373,7 @@ static int chcr_aead_common_init(struct aead_request *req) { struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct chcr_aead_ctx *aeadctx = AEAD_CTX(a_ctx(tfm)); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); unsigned int authsize = crypto_aead_authsize(tfm); int error = -EINVAL; @@ -2417,7 +2417,7 @@ static int chcr_aead_fallback(struct aead_request *req, unsigned short op_type) { struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct chcr_aead_ctx *aeadctx = AEAD_CTX(a_ctx(tfm)); - struct aead_request *subreq = aead_request_ctx(req); + struct aead_request *subreq = aead_request_ctx_dma(req); aead_request_set_tfm(subreq, aeadctx->sw_cipher); aead_request_set_callback(subreq, req->base.flags, @@ -2438,7 +2438,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req, struct uld_ctx *u_ctx = ULD_CTX(ctx); struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx); struct chcr_authenc_ctx *actx = AUTHENC_CTX(aeadctx); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct sk_buff *skb = NULL; struct chcr_wr *chcr_req; struct cpl_rx_phys_dsgl *phys_cpl; @@ -2576,7 +2576,7 @@ int chcr_aead_dma_map(struct device *dev, unsigned short op_type) { int error; - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); unsigned int authsize = crypto_aead_authsize(tfm); int src_len, dst_len; @@ -2637,7 +2637,7 @@ void chcr_aead_dma_unmap(struct device *dev, struct aead_request *req, unsigned short op_type) { - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); unsigned int authsize = crypto_aead_authsize(tfm); int src_len, dst_len; @@ -2678,7 +2678,7 @@ void chcr_add_aead_src_ent(struct aead_request *req, struct ulptx_sgl *ulptx) { struct ulptx_walk ulp_walk; - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + 
struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); if (reqctx->imm) { u8 *buf = (u8 *)ulptx; @@ -2704,7 +2704,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req, struct cpl_rx_phys_dsgl *phys_cpl, unsigned short qid) { - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct dsgl_walk dsgl_walk; unsigned int authsize = crypto_aead_authsize(tfm); @@ -2894,7 +2894,7 @@ static int generate_b0(struct aead_request *req, u8 *ivptr, unsigned int l, lp, m; int rc; struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); u8 *b0 = reqctx->scratch_pad; m = crypto_aead_authsize(aead); @@ -2932,7 +2932,7 @@ static int ccm_format_packet(struct aead_request *req, unsigned short op_type, unsigned int assoclen) { - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct chcr_aead_ctx *aeadctx = AEAD_CTX(a_ctx(tfm)); int rc = 0; @@ -2963,7 +2963,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl, struct chcr_context *ctx = a_ctx(tfm); struct uld_ctx *u_ctx = ULD_CTX(ctx); struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM; unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC; unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan; @@ -3036,7 +3036,7 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req, { struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct chcr_aead_ctx *aeadctx = AEAD_CTX(a_ctx(tfm)); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct sk_buff *skb = NULL; struct chcr_wr *chcr_req; struct cpl_rx_phys_dsgl *phys_cpl; @@ -3135,7 +3135,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req, struct chcr_context *ctx = a_ctx(tfm); struct uld_ctx *u_ctx = ULD_CTX(ctx); struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct sk_buff *skb = NULL; struct chcr_wr *chcr_req; struct cpl_rx_phys_dsgl *phys_cpl; @@ -3255,9 +3255,10 @@ static int chcr_aead_cra_init(struct crypto_aead *tfm) CRYPTO_ALG_ASYNC); if (IS_ERR(aeadctx->sw_cipher)) return PTR_ERR(aeadctx->sw_cipher); - crypto_aead_set_reqsize(tfm, max(sizeof(struct chcr_aead_reqctx), - sizeof(struct aead_request) + - crypto_aead_reqsize(aeadctx->sw_cipher))); + crypto_aead_set_reqsize_dma( + tfm, max(sizeof(struct chcr_aead_reqctx), + sizeof(struct aead_request) + + crypto_aead_reqsize(aeadctx->sw_cipher))); return chcr_device_init(a_ctx(tfm)); } @@ -3735,7 +3736,7 @@ static int chcr_aead_op(struct aead_request *req, create_wr_t create_wr_fn) { struct crypto_aead *tfm = crypto_aead_reqtfm(req); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct chcr_context *ctx = a_ctx(tfm); struct uld_ctx *u_ctx = ULD_CTX(ctx); struct sk_buff *skb; @@ -3785,7 +3786,7 @@ static int chcr_aead_op(struct aead_request *req, static int chcr_aead_encrypt(struct aead_request *req) { struct 
crypto_aead *tfm = crypto_aead_reqtfm(req); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); struct chcr_context *ctx = a_ctx(tfm); unsigned int cpu; @@ -3816,7 +3817,7 @@ static int chcr_aead_decrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct chcr_context *ctx = a_ctx(tfm); struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx); - struct chcr_aead_reqctx *reqctx = aead_request_ctx(req); + struct chcr_aead_reqctx *reqctx = aead_request_ctx_dma(req); int size; unsigned int cpu;
From patchwork Fri Dec 2 09:20:55 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062453
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:20:55 +0800
Subject: [PATCH 5/10] crypto: hisilicon/hpre - Set DMA alignment explicitly
To: Catalin Marinas , Ard Biesheuvel , Will Deacon , Marc Zyngier , Arnd Bergmann , Greg Kroah-Hartman , Andrew Morton , Linus Torvalds , Linux Memory Management List , Linux ARM , Linux Kernel Mailing List , "David S. Miller" , Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API.

Signed-off-by: Herbert Xu --- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 40 +++++++++++++++++----------- 1 file changed, 25 insertions(+), 15 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index 5f6d363c9435..8ede77310dc5 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -147,6 +147,16 @@ struct hpre_asym_request { struct timespec64 req_time; }; +static inline unsigned int hpre_align_sz(void) +{ + return ((crypto_dma_align() - 1) | (HPRE_ALIGN_SZ - 1)) + 1; +} + +static inline unsigned int hpre_align_pd(void) +{ + return (hpre_align_sz() - 1) & ~(crypto_tfm_ctx_alignment() - 1); +} + static int hpre_alloc_req_id(struct hpre_ctx *ctx) { unsigned long flags; @@ -517,7 +527,7 @@ static int hpre_msg_request_set(struct hpre_ctx *ctx, void *req, bool is_rsa) } tmp = akcipher_request_ctx(akreq); - h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req = PTR_ALIGN(tmp, hpre_align_sz()); h_req->cb = hpre_rsa_cb; h_req->areq.rsa = akreq; msg = &h_req->req; @@ -531,7 +541,7 @@ static int hpre_msg_request_set(struct hpre_ctx *ctx, void *req, bool is_rsa) } tmp = kpp_request_ctx(kreq); - h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req = PTR_ALIGN(tmp, hpre_align_sz()); h_req->cb = hpre_dh_cb; h_req->areq.dh = kreq; msg = &h_req->req; @@ -582,7 +592,7 @@ static int hpre_dh_compute_value(struct kpp_request *req) struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); void *tmp = kpp_request_ctx(req); - struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, hpre_align_sz()); struct hpre_sqe *msg = &hpre_req->req; int ret; @@ -740,7 +750,7 @@ static int hpre_dh_init_tfm(struct crypto_kpp *tfm) { struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); - kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ); + kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + hpre_align_pd()); return hpre_ctx_init(ctx, HPRE_V2_ALG_TYPE); } @@ -785,7 +795,7 @@ static int hpre_rsa_enc(struct akcipher_request *req) struct crypto_akcipher *tfm =
crypto_akcipher_reqtfm(req); struct hpre_ctx *ctx = akcipher_tfm_ctx(tfm); void *tmp = akcipher_request_ctx(req); - struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, hpre_align_sz()); struct hpre_sqe *msg = &hpre_req->req; int ret; @@ -833,7 +843,7 @@ static int hpre_rsa_dec(struct akcipher_request *req) struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req); struct hpre_ctx *ctx = akcipher_tfm_ctx(tfm); void *tmp = akcipher_request_ctx(req); - struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, hpre_align_sz()); struct hpre_sqe *msg = &hpre_req->req; int ret; @@ -1168,7 +1178,7 @@ static int hpre_rsa_init_tfm(struct crypto_akcipher *tfm) } akcipher_set_reqsize(tfm, sizeof(struct hpre_asym_request) + - HPRE_ALIGN_SZ); + hpre_align_pd()); ret = hpre_ctx_init(ctx, HPRE_V2_ALG_TYPE); if (ret) @@ -1490,7 +1500,7 @@ static int hpre_ecdh_msg_request_set(struct hpre_ctx *ctx, } tmp = kpp_request_ctx(req); - h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req = PTR_ALIGN(tmp, hpre_align_sz()); h_req->cb = hpre_ecdh_cb; h_req->areq.ecdh = req; msg = &h_req->req; @@ -1571,7 +1581,7 @@ static int hpre_ecdh_compute_value(struct kpp_request *req) struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); struct device *dev = ctx->dev; void *tmp = kpp_request_ctx(req); - struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, hpre_align_sz()); struct hpre_sqe *msg = &hpre_req->req; int ret; @@ -1622,7 +1632,7 @@ static int hpre_ecdh_nist_p192_init_tfm(struct crypto_kpp *tfm) ctx->curve_id = ECC_CURVE_NIST_P192; - kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ); + kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + hpre_align_pd()); return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); } @@ -1633,7 +1643,7 @@ static int hpre_ecdh_nist_p256_init_tfm(struct crypto_kpp *tfm) ctx->curve_id = ECC_CURVE_NIST_P256; - kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ); + kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + hpre_align_pd()); return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); } @@ -1644,7 +1654,7 @@ static int hpre_ecdh_nist_p384_init_tfm(struct crypto_kpp *tfm) ctx->curve_id = ECC_CURVE_NIST_P384; - kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ); + kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + hpre_align_pd()); return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); } @@ -1802,7 +1812,7 @@ static int hpre_curve25519_msg_request_set(struct hpre_ctx *ctx, } tmp = kpp_request_ctx(req); - h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req = PTR_ALIGN(tmp, hpre_align_sz()); h_req->cb = hpre_curve25519_cb; h_req->areq.curve25519 = req; msg = &h_req->req; @@ -1923,7 +1933,7 @@ static int hpre_curve25519_compute_value(struct kpp_request *req) struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); struct device *dev = ctx->dev; void *tmp = kpp_request_ctx(req); - struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, hpre_align_sz()); struct hpre_sqe *msg = &hpre_req->req; int ret; @@ -1972,7 +1982,7 @@ static int hpre_curve25519_init_tfm(struct crypto_kpp *tfm) { struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); - kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ); + kpp_set_reqsize(tfm, sizeof(struct hpre_asym_request) + hpre_align_pd()); return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); } 
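Unlike the drivers that only swap accessors, hpre over-allocates its request context and aligns the pointer by hand with PTR_ALIGN(), so the two helpers added above derive the figures at run time. A standalone arithmetic sketch of what they compute, under assumed example values (crypto_dma_align() == 128, as with ARCH_DMA_MINALIGN on arm64; HPRE_ALIGN_SZ == 64; crypto_tfm_ctx_alignment() == 8):

#include <stdio.h>

#define HPRE_ALIGN_SZ 64

/* Stand-ins for the kernel helpers; the return values are assumptions. */
static unsigned int crypto_dma_align(void) { return 128; }
static unsigned int crypto_tfm_ctx_alignment(void) { return 8; }

static unsigned int hpre_align_sz(void)
{
	/* Both operands are powers of two, so or-ing the masks and adding
	 * one yields the larger of the two alignments:
	 * ((128 - 1) | (64 - 1)) + 1 == 128. */
	return ((crypto_dma_align() - 1) | (HPRE_ALIGN_SZ - 1)) + 1;
}

static unsigned int hpre_align_pd(void)
{
	/* Worst-case padding PTR_ALIGN() may consume, rounded down to the
	 * alignment the context pointer is already guaranteed to have:
	 * (128 - 1) & ~(8 - 1) == 120. */
	return (hpre_align_sz() - 1) & ~(crypto_tfm_ctx_alignment() - 1);
}

int main(void)
{
	printf("align %u, padding %u\n", hpre_align_sz(), hpre_align_pd());
	return 0;
}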
From patchwork Fri Dec 2 09:20:57 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062454
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:20:57 +0800
Subject: [PATCH 6/10] crypto: safexcel - Set DMA alignment explicitly
To: Catalin Marinas , Ard Biesheuvel , Will Deacon , Marc Zyngier , Arnd Bergmann , Greg Kroah-Hartman , Andrew Morton , Linus Torvalds , Linux Memory Management List , Linux ARM , Linux Kernel Mailing List , "David S. Miller" , Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API.

Signed-off-by: Herbert Xu --- drivers/crypto/inside-secure/safexcel_hash.c | 99 +++++++++++++-------------- 1 file changed, 50 insertions(+), 49 deletions(-) diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c index 103fc551d2af..ca46328472d4 100644 --- a/drivers/crypto/inside-secure/safexcel_hash.c +++ b/drivers/crypto/inside-secure/safexcel_hash.c @@ -231,7 +231,7 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, struct safexcel_result_desc *rdesc; struct ahash_request *areq = ahash_request_cast(async); struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq); - struct safexcel_ahash_req *sreq = ahash_request_ctx(areq); + struct safexcel_ahash_req *sreq = ahash_request_ctx_dma(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(ahash); u64 cache_len; @@ -312,7 +312,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring, int *commands, int *results) { struct ahash_request *areq = ahash_request_cast(async); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); struct safexcel_crypto_priv *priv = ctx->base.priv; struct safexcel_command_desc *cdesc, *first_cdesc = NULL; @@ -569,7 +569,7 @@ static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring, bool *should_complete, int *ret) { struct ahash_request *areq = ahash_request_cast(async); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); int err; BUG_ON(!(priv->flags & EIP197_TRC_CACHE) && req->needs_inv); @@ -608,7 +608,7 @@ static int safexcel_ahash_send(struct crypto_async_request *async, int ring, int *commands, int *results) { struct ahash_request *areq = ahash_request_cast(async); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); int ret; if (req->needs_inv) @@ -624,7 +624,7 @@ static int safexcel_ahash_exit_inv(struct crypto_tfm *tfm) struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(tfm); struct safexcel_crypto_priv *priv = ctx->base.priv; EIP197_REQUEST_ON_STACK(req, ahash, EIP197_AHASH_REQ_SIZE); - struct safexcel_ahash_req *rctx = ahash_request_ctx(req); + struct safexcel_ahash_req *rctx = ahash_request_ctx_dma(req); struct safexcel_inv_result result = {}; int ring = ctx->base.ring; @@ -663,7 +663,7 @@
static int safexcel_ahash_exit_inv(struct crypto_tfm *tfm) */ static int safexcel_ahash_cache(struct ahash_request *areq) { - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); u64 cache_len; /* cache_len: everything accepted by the driver but not sent yet, @@ -689,7 +689,7 @@ static int safexcel_ahash_cache(struct ahash_request *areq) static int safexcel_ahash_enqueue(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); struct safexcel_crypto_priv *priv = ctx->base.priv; int ret, ring; @@ -741,7 +741,7 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq) static int safexcel_ahash_update(struct ahash_request *areq) { - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); int ret; /* If the request is 0 length, do nothing */ @@ -766,7 +766,7 @@ static int safexcel_ahash_update(struct ahash_request *areq) static int safexcel_ahash_final(struct ahash_request *areq) { - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); req->finish = true; @@ -870,7 +870,7 @@ static int safexcel_ahash_final(struct ahash_request *areq) static int safexcel_ahash_finup(struct ahash_request *areq) { - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); req->finish = true; @@ -880,7 +880,7 @@ static int safexcel_ahash_finup(struct ahash_request *areq) static int safexcel_ahash_export(struct ahash_request *areq, void *out) { - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); struct safexcel_ahash_export_state *export = out; export->len = req->len; @@ -896,7 +896,7 @@ static int safexcel_ahash_export(struct ahash_request *areq, void *out) static int safexcel_ahash_import(struct ahash_request *areq, const void *in) { - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); const struct safexcel_ahash_export_state *export = in; int ret; @@ -927,15 +927,15 @@ static int safexcel_ahash_cra_init(struct crypto_tfm *tfm) ctx->base.handle_result = safexcel_handle_result; ctx->fb_do_setkey = false; - crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), - sizeof(struct safexcel_ahash_req)); + crypto_ahash_set_reqsize_dma(__crypto_ahash_cast(tfm), + sizeof(struct safexcel_ahash_req)); return 0; } static int safexcel_sha1_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1012,7 +1012,7 @@ struct safexcel_alg_template safexcel_alg_sha1 = { static int safexcel_hmac_sha1_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1124,7 +1124,7 @@ static int safexcel_hmac_init_iv(struct ahash_request *areq, if (ret) return ret; - req 
= ahash_request_ctx(areq); + req = ahash_request_ctx_dma(areq); req->hmac = true; req->last_req = true; @@ -1264,7 +1264,7 @@ struct safexcel_alg_template safexcel_alg_hmac_sha1 = { static int safexcel_sha256_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1321,7 +1321,7 @@ struct safexcel_alg_template safexcel_alg_sha256 = { static int safexcel_sha224_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1385,7 +1385,7 @@ static int safexcel_hmac_sha224_setkey(struct crypto_ahash *tfm, const u8 *key, static int safexcel_hmac_sha224_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1457,7 +1457,7 @@ static int safexcel_hmac_sha256_setkey(struct crypto_ahash *tfm, const u8 *key, static int safexcel_hmac_sha256_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1522,7 +1522,7 @@ struct safexcel_alg_template safexcel_alg_hmac_sha256 = { static int safexcel_sha512_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1579,7 +1579,7 @@ struct safexcel_alg_template safexcel_alg_sha512 = { static int safexcel_sha384_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1643,7 +1643,7 @@ static int safexcel_hmac_sha512_setkey(struct crypto_ahash *tfm, const u8 *key, static int safexcel_hmac_sha512_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1715,7 +1715,7 @@ static int safexcel_hmac_sha384_setkey(struct crypto_ahash *tfm, const u8 *key, static int safexcel_hmac_sha384_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1780,7 +1780,7 @@ struct safexcel_alg_template safexcel_alg_hmac_sha384 = { static int safexcel_md5_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, 
sizeof(*req)); @@ -1837,7 +1837,7 @@ struct safexcel_alg_template safexcel_alg_md5 = { static int safexcel_hmac_md5_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1920,7 +1920,7 @@ static int safexcel_crc32_cra_init(struct crypto_tfm *tfm) static int safexcel_crc32_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -1992,7 +1992,7 @@ struct safexcel_alg_template safexcel_alg_crc32 = { static int safexcel_cbcmac_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2252,7 +2252,7 @@ struct safexcel_alg_template safexcel_alg_cmac = { static int safexcel_sm3_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2316,7 +2316,7 @@ static int safexcel_hmac_sm3_setkey(struct crypto_ahash *tfm, const u8 *key, static int safexcel_hmac_sm3_init(struct ahash_request *areq) { struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2382,7 +2382,7 @@ static int safexcel_sha3_224_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2400,7 +2400,7 @@ static int safexcel_sha3_fbcheck(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); int ret = 0; if (ctx->do_fallback) { @@ -2437,7 +2437,7 @@ static int safexcel_sha3_update(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); ctx->do_fallback = true; return safexcel_sha3_fbcheck(req) ?: crypto_ahash_update(subreq); @@ -2447,7 +2447,7 @@ static int safexcel_sha3_final(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); ctx->do_fallback = true; return safexcel_sha3_fbcheck(req) ?: crypto_ahash_final(subreq); @@ -2457,7 +2457,7 @@ static int safexcel_sha3_finup(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = 
crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); ctx->do_fallback |= !req->nbytes; if (ctx->do_fallback) @@ -2472,7 +2472,7 @@ static int safexcel_sha3_digest_fallback(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); ctx->do_fallback = true; ctx->fb_init_done = false; @@ -2492,7 +2492,7 @@ static int safexcel_sha3_export(struct ahash_request *req, void *out) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); ctx->do_fallback = true; return safexcel_sha3_fbcheck(req) ?: crypto_ahash_export(subreq, out); @@ -2502,7 +2502,7 @@ static int safexcel_sha3_import(struct ahash_request *req, const void *in) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct ahash_request *subreq = ahash_request_ctx(req); + struct ahash_request *subreq = ahash_request_ctx_dma(req); ctx->do_fallback = true; return safexcel_sha3_fbcheck(req) ?: crypto_ahash_import(subreq, in); @@ -2526,9 +2526,10 @@ static int safexcel_sha3_cra_init(struct crypto_tfm *tfm) /* Update statesize from fallback algorithm! */ crypto_hash_alg_common(ahash)->statesize = crypto_ahash_statesize(ctx->fback); - crypto_ahash_set_reqsize(ahash, max(sizeof(struct safexcel_ahash_req), - sizeof(struct ahash_request) + - crypto_ahash_reqsize(ctx->fback))); + crypto_ahash_set_reqsize_dma( + ahash, max(sizeof(struct safexcel_ahash_req), + sizeof(struct ahash_request) + + crypto_ahash_reqsize(ctx->fback))); return 0; } @@ -2575,7 +2576,7 @@ static int safexcel_sha3_256_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2633,7 +2634,7 @@ static int safexcel_sha3_384_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2691,7 +2692,7 @@ static int safexcel_sha3_512_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2841,7 +2842,7 @@ static int safexcel_hmac_sha3_224_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2912,7 +2913,7 @@ static int safexcel_hmac_sha3_256_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req 
*req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -2983,7 +2984,7 @@ static int safexcel_hmac_sha3_384_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req)); @@ -3054,7 +3055,7 @@ static int safexcel_hmac_sha3_512_init(struct ahash_request *areq) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq); struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(tfm); - struct safexcel_ahash_req *req = ahash_request_ctx(areq); + struct safexcel_ahash_req *req = ahash_request_ctx_dma(areq); memset(req, 0, sizeof(*req));
From patchwork Fri Dec 2 09:20:59 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062455
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:20:59 +0800
Subject: [PATCH 7/10] crypto: keembay - Set DMA alignment explicitly
To: Catalin Marinas , Ard Biesheuvel , Will Deacon , Marc Zyngier , Arnd Bergmann , Greg Kroah-Hartman , Andrew Morton , Linus Torvalds , Linux Memory Management List , Linux ARM , Linux Kernel Mailing List , "David S. Miller" , Linux Crypto Mailing List

This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API.

Signed-off-by: Herbert Xu --- drivers/crypto/keembay/keembay-ocs-hcu-core.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/crypto/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/keembay/keembay-ocs-hcu-core.c index 0379dbf32a4c..d4bcbed1f546 100644 --- a/drivers/crypto/keembay/keembay-ocs-hcu-core.c +++ b/drivers/crypto/keembay/keembay-ocs-hcu-core.c @@ -226,7 +226,7 @@ static void kmb_ocs_hcu_dma_cleanup(struct ahash_request *req, */ static int kmb_ocs_dma_prepare(struct ahash_request *req) { - struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req); struct device *dev = rctx->hcu_dev->dev; unsigned int remainder = 0; unsigned int total; @@ -356,7 +356,7 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req) static void kmb_ocs_hcu_secure_cleanup(struct ahash_request *req) { - struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req); /* Clear buffer of any data.
         memzero_explicit(rctx->buffer, sizeof(rctx->buffer));
 
@@ -374,7 +374,7 @@ static int kmb_ocs_hcu_handle_queue(struct ahash_request *req)
 
 static int prepare_ipad(struct ahash_request *req)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
         struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
         struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm);
         int i;
@@ -414,7 +414,7 @@ static int kmb_ocs_hcu_do_one_request(struct crypto_engine *engine, void *areq)
                                                   base);
         struct ocs_hcu_dev *hcu_dev = kmb_ocs_hcu_find_dev(req);
         struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
         struct ocs_hcu_ctx *tctx = crypto_ahash_ctx(tfm);
         int rc;
         int i;
@@ -561,7 +561,7 @@ static int kmb_ocs_hcu_do_one_request(struct crypto_engine *engine, void *areq)
 static int kmb_ocs_hcu_init(struct ahash_request *req)
 {
         struct ocs_hcu_dev *hcu_dev = kmb_ocs_hcu_find_dev(req);
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
         struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
         struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm);
 
@@ -614,7 +614,7 @@ static int kmb_ocs_hcu_init(struct ahash_request *req)
 
 static int kmb_ocs_hcu_update(struct ahash_request *req)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
         int rc;
 
         if (!req->nbytes)
@@ -650,7 +650,7 @@ static int kmb_ocs_hcu_update(struct ahash_request *req)
 /* Common logic for kmb_ocs_hcu_final() and kmb_ocs_hcu_finup(). */
 static int kmb_ocs_hcu_fin_common(struct ahash_request *req)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
         struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
         struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm);
         int rc;
@@ -687,7 +687,7 @@ static int kmb_ocs_hcu_fin_common(struct ahash_request *req)
 
 static int kmb_ocs_hcu_final(struct ahash_request *req)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
 
         rctx->sg_data_total = 0;
         rctx->sg_data_offset = 0;
@@ -698,7 +698,7 @@ static int kmb_ocs_hcu_final(struct ahash_request *req)
 
 static int kmb_ocs_hcu_finup(struct ahash_request *req)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
 
         rctx->sg_data_total = req->nbytes;
         rctx->sg_data_offset = 0;
@@ -726,7 +726,7 @@ static int kmb_ocs_hcu_digest(struct ahash_request *req)
 
 static int kmb_ocs_hcu_export(struct ahash_request *req, void *out)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
 
         /* Intermediate data is always stored and applied per request. */
         memcpy(out, rctx, sizeof(*rctx));
@@ -736,7 +736,7 @@ static int kmb_ocs_hcu_export(struct ahash_request *req, void *out)
 
 static int kmb_ocs_hcu_import(struct ahash_request *req, const void *in)
 {
-        struct ocs_hcu_rctx *rctx = ahash_request_ctx(req);
+        struct ocs_hcu_rctx *rctx = ahash_request_ctx_dma(req);
 
         /* Intermediate data is always stored and applied per request. */
         memcpy(rctx, in, sizeof(*rctx));
@@ -822,8 +822,8 @@ static int kmb_ocs_hcu_setkey(struct crypto_ahash *tfm, const u8 *key,
 /* Set request size and initialize tfm context. */
 static void __cra_init(struct crypto_tfm *tfm, struct ocs_hcu_ctx *ctx)
 {
-        crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
-                                 sizeof(struct ocs_hcu_rctx));
+        crypto_ahash_set_reqsize_dma(__crypto_ahash_cast(tfm),
+                                     sizeof(struct ocs_hcu_rctx));
 
         /* Init context to 0. */
         memzero_explicit(ctx, sizeof(*ctx));
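Note that none of the driver logic changes in these conversions, only where the context pointer lands. Roughly, the accessor side can be pictured as below; this is a simplified sketch of the helper's behaviour, not its verbatim kernel definition.

/*
 * Simplified sketch (hypothetical name): if the platform's DMA alignment
 * exceeds what kmalloc already guarantees for crypto contexts, round the
 * pointer up. The matching set_reqsize_dma()/CRYPTO_DMA_PADDING
 * reservation ensures the rounded-up pointer still has room for the
 * full context.
 */
static inline void *sketch_request_ctx_dma(struct ahash_request *req)
{
        unsigned int align = crypto_dma_align();

        if (align <= crypto_tfm_ctx_alignment())
                align = 1;      /* kmalloc alignment already suffices */

        return PTR_ALIGN(ahash_request_ctx(req), align);
}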
From patchwork Fri Dec 2 09:21:01 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062456
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:21:01 +0800
Subject: [PATCH 8/10] crypto: octeontx - Set DMA alignment explicitly

This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA.  This may no longer be the case with
upcoming arm64 changes.

This patch changes it to explicitly request DMA alignment from
the Crypto API.

Signed-off-by: Herbert Xu
---
 drivers/crypto/marvell/octeontx/otx_cptvf_algs.c | 69 +++++++++++------------
 1 file changed, 35 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
index 01c48ddc4eeb..80ba77c793a7 100644
--- a/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
+++ b/drivers/crypto/marvell/octeontx/otx_cptvf_algs.c
@@ -103,7 +103,7 @@ static inline int validate_hmac_cipher_null(struct otx_cpt_req_info *cpt_req)
 
         req = container_of(cpt_req->areq, struct aead_request, base);
         tfm = crypto_aead_reqtfm(req);
-        rctx = aead_request_ctx(req);
+        rctx = aead_request_ctx_dma(req);
         if (memcmp(rctx->fctx.hmac.s.hmac_calc,
                    rctx->fctx.hmac.s.hmac_recv,
                    crypto_aead_authsize(tfm)) != 0)
@@ -155,7 +155,7 @@ static void output_iv_copyback(struct crypto_async_request *areq)
         ctx = crypto_skcipher_ctx(stfm);
         if (ctx->cipher_type == OTX_CPT_AES_CBC ||
             ctx->cipher_type == OTX_CPT_DES3_CBC) {
-                rctx = skcipher_request_ctx(sreq);
+                rctx = skcipher_request_ctx_dma(sreq);
                 req_info = &rctx->cpt_req;
                 ivsize = crypto_skcipher_ivsize(stfm);
                 start = sreq->cryptlen - ivsize;
@@ -233,7 +233,7 @@ static inline u32 create_ctx_hdr(struct skcipher_request *req, u32 enc,
                                  u32 *argcnt)
 {
         struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
-        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         struct crypto_tfm *tfm = crypto_skcipher_tfm(stfm);
         struct otx_cpt_enc_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -303,7 +303,7 @@ static inline u32 create_ctx_hdr(struct skcipher_request *req, u32 enc,
 static inline u32 create_input_list(struct skcipher_request *req, u32 enc,
                                     u32 enc_iv_len)
 {
-        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         u32 argcnt = 0;
         int ret;
@@ -321,7 +321,7 @@ static inline u32 create_input_list(struct skcipher_request *req, u32 enc,
 static inline void create_output_list(struct skcipher_request *req,
                                       u32 enc_iv_len)
 {
-        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         u32 argcnt = 0;
 
@@ -340,7 +340,7 @@ static inline void create_output_list(struct skcipher_request *req,
 static inline int cpt_enc_dec(struct skcipher_request *req, u32 enc)
 {
         struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
-        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         u32 enc_iv_len = crypto_skcipher_ivsize(stfm);
         struct pci_dev *pdev;
@@ -501,15 +501,16 @@ static int otx_cpt_enc_dec_init(struct crypto_skcipher *tfm)
          * allocated since the cryptd daemon uses
          * this memory for request_ctx information
          */
-        crypto_skcipher_set_reqsize(tfm, sizeof(struct otx_cpt_req_ctx) +
-                                         sizeof(struct skcipher_request));
+        crypto_skcipher_set_reqsize_dma(
+                tfm, sizeof(struct otx_cpt_req_ctx) +
+                     sizeof(struct skcipher_request));
 
         return 0;
 }
 
 static int cpt_aead_init(struct crypto_aead *tfm, u8 cipher_type, u8 mac_type)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
         ctx->cipher_type = cipher_type;
         ctx->mac_type = mac_type;
@@ -551,7 +552,7 @@ static int cpt_aead_init(struct crypto_aead *tfm, u8 cipher_type, u8 mac_type)
                 }
         }
 
-        crypto_aead_set_reqsize(tfm, sizeof(struct otx_cpt_req_ctx));
+        crypto_aead_set_reqsize_dma(tfm, sizeof(struct otx_cpt_req_ctx));
 
         return 0;
 }
@@ -603,7 +604,7 @@ static int otx_cpt_aead_gcm_aes_init(struct crypto_aead *tfm)
 
 static void otx_cpt_aead_exit(struct crypto_aead *tfm)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
         kfree(ctx->ipad);
         kfree(ctx->opad);
@@ -619,7 +620,7 @@ static void otx_cpt_aead_exit(struct crypto_aead *tfm)
 static int otx_cpt_aead_set_authsize(struct crypto_aead *tfm,
                                      unsigned int authsize)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
         switch (ctx->mac_type) {
         case OTX_CPT_SHA1:
@@ -739,7 +740,7 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
 
 static int aead_hmac_init(struct crypto_aead *cipher)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
         int state_size = crypto_shash_statesize(ctx->hashalg);
         int ds = crypto_shash_digestsize(ctx->hashalg);
         int bs = crypto_shash_blocksize(ctx->hashalg);
@@ -837,7 +838,7 @@ static int otx_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher,
                                            const unsigned char *key,
                                            unsigned int keylen)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
         struct crypto_authenc_key_param *param;
         int enckeylen = 0, authkeylen = 0;
         struct rtattr *rta = (void *)key;
@@ -896,7 +897,7 @@ static int otx_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher,
                                             const unsigned char *key,
                                             unsigned int keylen)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
         struct crypto_authenc_key_param *param;
         struct rtattr *rta = (void *)key;
         int enckeylen = 0;
@@ -932,7 +933,7 @@ static int otx_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
                                        const unsigned char *key,
                                        unsigned int keylen)
 {
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
 
         /*
          * For aes gcm we expect to get encryption key (16, 24, 32 bytes)
@@ -965,9 +966,9 @@ static int otx_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
 static inline u32 create_aead_ctx_hdr(struct aead_request *req, u32 enc,
                                       u32 *argcnt)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         struct otx_cpt_fc_ctx *fctx = &rctx->fctx;
         int mac_len = crypto_aead_authsize(tfm);
@@ -1050,9 +1051,9 @@ static inline u32 create_aead_ctx_hdr(struct aead_request *req, u32 enc,
 static inline u32 create_hmac_ctx_hdr(struct aead_request *req, u32 *argcnt,
                                       u32 enc)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
 
         req_info->ctrl.s.dma_mode = OTX_CPT_DMA_GATHER_SCATTER;
@@ -1076,7 +1077,7 @@ static inline u32 create_hmac_ctx_hdr(struct aead_request *req, u32 *argcnt,
 
 static inline u32 create_aead_input_list(struct aead_request *req, u32 enc)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         u32 inputlen = req->cryptlen + req->assoclen;
         u32 status, argcnt = 0;
@@ -1093,7 +1094,7 @@ static inline u32 create_aead_input_list(struct aead_request *req, u32 enc)
 static inline u32 create_aead_output_list(struct aead_request *req, u32 enc,
                                           u32 mac_len)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         u32 argcnt = 0, outputlen = 0;
 
@@ -1111,7 +1112,7 @@ static inline u32 create_aead_output_list(struct aead_request *req, u32 enc,
 static inline u32 create_aead_null_input_list(struct aead_request *req,
                                               u32 enc, u32 mac_len)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         u32 inputlen, argcnt = 0;
 
@@ -1130,7 +1131,7 @@ static inline u32 create_aead_null_input_list(struct aead_request *req,
 static inline u32 create_aead_null_output_list(struct aead_request *req,
                                                u32 enc, u32 mac_len)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         struct scatterlist *dst;
         u8 *ptr = NULL;
@@ -1217,7 +1218,7 @@ static inline u32 create_aead_null_output_list(struct aead_request *req,
 
 static u32 cpt_aead_enc_dec(struct aead_request *req, u8 reg_type, u8 enc)
 {
-        struct otx_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx_cpt_req_info *req_info = &rctx->cpt_req;
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
         struct pci_dev *pdev;
@@ -1409,7 +1410,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha1_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1428,7 +1429,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha256_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1447,7 +1448,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha384_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1466,7 +1467,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha512_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1485,7 +1486,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha1_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1504,7 +1505,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha256_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1523,7 +1524,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha384_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1542,7 +1543,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha512_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1561,7 +1562,7 @@ static struct aead_alg otx_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_rfc4106_gcm_aes",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
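The octeontx patch above also shows the second half of the pattern, for tfm contexts rather than request contexts: every .cra_ctxsize grows by CRYPTO_DMA_PADDING so that the crypto_aead_ctx_dma() accessor can round the context pointer up without overrunning the allocation. Both halves must change together; padding the size while keeping the plain accessor, or vice versa, would leave the driver and the API disagreeing about where the context starts. A hypothetical pairing follows, with example_* names that are not taken from the patch:

struct example_aead_ctx {
        u8 key[32];     /* programmed into the DMA engine */
};

static struct aead_alg example_alg = {
        .base = {
                /* Other mandatory fields elided for brevity. */
                .cra_name    = "example(aes)",
                /*
                 * Padded so crypto_aead_ctx_dma() can align the context
                 * pointer up to the DMA alignment within the allocation.
                 */
                .cra_ctxsize = sizeof(struct example_aead_ctx) +
                               CRYPTO_DMA_PADDING,
        },
};

static int example_setkey(struct crypto_aead *tfm, const u8 *key,
                          unsigned int keylen)
{
        /* Must pair with the padded .cra_ctxsize above. */
        struct example_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);

        if (keylen > sizeof(ctx->key))
                return -EINVAL;
        memcpy(ctx->key, key, keylen);
        return 0;
}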
From patchwork Fri Dec 2 09:21:03 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062457
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:21:03 +0800
Subject: [PATCH 9/10] crypto: octeontx2 - Set DMA alignment explicitly
Miller" , Linux Crypto Mailing List Message-Id: X-Spamd-Result: default: False [-3.20 / 9.00]; BAYES_HAM(-3.00)[99.99%]; R_SPF_ALLOW(-0.20)[+mx:c]; RCVD_NO_TLS_LAST(0.10)[]; MIME_GOOD(-0.10)[text/plain]; FROM_EQ_ENVFROM(0.00)[]; RCPT_COUNT_TWELVE(0.00)[13]; R_DKIM_NA(0.00)[]; TO_DN_ALL(0.00)[]; TO_MATCH_ENVRCPT_SOME(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; FROM_HAS_DN(0.00)[]; MIME_TRACE(0.00)[0:+]; DMARC_NA(0.00)[apana.org.au]; ARC_SIGNED(0.00)[hostedemail.com:s=arc-20220608:i=1]; ARC_NA(0.00)[] X-Rspamd-Queue-Id: A0F4B2000A X-Rspam-User: X-Rspamd-Server: rspam07 X-Stat-Signature: rmn5mjz4mjc7gn5nb4yqp8jsc54pqzrp X-HE-Tag: 1669972874-113265 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu --- drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c | 79 ++++++++++----------- 1 file changed, 40 insertions(+), 39 deletions(-) diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c index 67530e90bbfe..30b423605c9c 100644 --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c @@ -87,7 +87,7 @@ static inline int validate_hmac_cipher_null(struct otx2_cpt_req_info *cpt_req) req = container_of(cpt_req->areq, struct aead_request, base); tfm = crypto_aead_reqtfm(req); - rctx = aead_request_ctx(req); + rctx = aead_request_ctx_dma(req); if (memcmp(rctx->fctx.hmac.s.hmac_calc, rctx->fctx.hmac.s.hmac_recv, crypto_aead_authsize(tfm)) != 0) @@ -137,7 +137,7 @@ static void output_iv_copyback(struct crypto_async_request *areq) ctx = crypto_skcipher_ctx(stfm); if (ctx->cipher_type == OTX2_CPT_AES_CBC || ctx->cipher_type == OTX2_CPT_DES3_CBC) { - rctx = skcipher_request_ctx(sreq); + rctx = skcipher_request_ctx_dma(sreq); req_info = &rctx->cpt_req; ivsize = crypto_skcipher_ivsize(stfm); start = sreq->cryptlen - ivsize; @@ -219,7 +219,7 @@ static inline int create_ctx_hdr(struct skcipher_request *req, u32 enc, u32 *argcnt) { struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req); - struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req); struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm); struct otx2_cpt_req_info *req_info = &rctx->cpt_req; struct otx2_cpt_fc_ctx *fctx = &rctx->fctx; @@ -288,7 +288,7 @@ static inline int create_ctx_hdr(struct skcipher_request *req, u32 enc, static inline int create_input_list(struct skcipher_request *req, u32 enc, u32 enc_iv_len) { - struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req); struct otx2_cpt_req_info *req_info = &rctx->cpt_req; u32 argcnt = 0; int ret; @@ -306,7 +306,7 @@ static inline int create_input_list(struct skcipher_request *req, u32 enc, static inline void create_output_list(struct skcipher_request *req, u32 enc_iv_len) { - struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req); struct otx2_cpt_req_info *req_info = &rctx->cpt_req; u32 argcnt = 0; @@ -325,7 +325,7 @@ static inline void create_output_list(struct skcipher_request *req, static int skcipher_do_fallback(struct 
 {
         struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
-        struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req);
         struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm);
         int ret;
 
@@ -348,7 +348,7 @@ static int skcipher_do_fallback(struct skcipher_request *req, bool is_enc)
 static inline int cpt_enc_dec(struct skcipher_request *req, u32 enc)
 {
         struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req);
-        struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx_dma(req);
         struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         u32 enc_iv_len = crypto_skcipher_ivsize(stfm);
@@ -537,8 +537,9 @@ static int otx2_cpt_enc_dec_init(struct crypto_skcipher *stfm)
          * allocated since the cryptd daemon uses
          * this memory for request_ctx information
          */
-        crypto_skcipher_set_reqsize(stfm, sizeof(struct otx2_cpt_req_ctx) +
-                                          sizeof(struct skcipher_request));
+        crypto_skcipher_set_reqsize_dma(
+                stfm, sizeof(struct otx2_cpt_req_ctx) +
+                      sizeof(struct skcipher_request));
 
         return cpt_skcipher_fallback_init(ctx, alg);
 }
@@ -572,7 +573,7 @@ static int cpt_aead_fallback_init(struct otx2_cpt_aead_ctx *ctx,
 
 static int cpt_aead_init(struct crypto_aead *atfm, u8 cipher_type, u8 mac_type)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(atfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(atfm);
         struct crypto_tfm *tfm = crypto_aead_tfm(atfm);
         struct crypto_alg *alg = tfm->__crt_alg;
 
@@ -629,7 +630,7 @@ static int cpt_aead_init(struct crypto_aead *atfm, u8 cipher_type, u8 mac_type)
                 ctx->enc_align_len = 1;
                 break;
         }
-        crypto_aead_set_reqsize(atfm, sizeof(struct otx2_cpt_req_ctx));
+        crypto_aead_set_reqsize_dma(atfm, sizeof(struct otx2_cpt_req_ctx));
 
         return cpt_aead_fallback_init(ctx, alg);
 }
@@ -681,7 +682,7 @@ static int otx2_cpt_aead_gcm_aes_init(struct crypto_aead *tfm)
 
 static void otx2_cpt_aead_exit(struct crypto_aead *tfm)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
         kfree(ctx->ipad);
         kfree(ctx->opad);
@@ -698,7 +699,7 @@ static void otx2_cpt_aead_exit(struct crypto_aead *tfm)
 static int otx2_cpt_aead_gcm_set_authsize(struct crypto_aead *tfm,
                                           unsigned int authsize)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
         if (crypto_rfc4106_check_authsize(authsize))
                 return -EINVAL;
@@ -722,7 +723,7 @@ static int otx2_cpt_aead_set_authsize(struct crypto_aead *tfm,
 static int otx2_cpt_aead_null_set_authsize(struct crypto_aead *tfm,
                                            unsigned int authsize)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
 
         ctx->is_trunc_hmac = true;
         tfm->authsize = authsize;
@@ -794,7 +795,7 @@ static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad)
 
 static int aead_hmac_init(struct crypto_aead *cipher)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
         int state_size = crypto_shash_statesize(ctx->hashalg);
         int ds = crypto_shash_digestsize(ctx->hashalg);
         int bs = crypto_shash_blocksize(ctx->hashalg);
@@ -892,7 +893,7 @@ static int otx2_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher,
                                             const unsigned char *key,
                                             unsigned int keylen)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
         struct crypto_authenc_key_param *param;
         int enckeylen = 0, authkeylen = 0;
         struct rtattr *rta = (void *)key;
@@ -944,7 +945,7 @@ static int otx2_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher,
                                              const unsigned char *key,
                                              unsigned int keylen)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
         struct crypto_authenc_key_param *param;
         struct rtattr *rta = (void *)key;
         int enckeylen = 0;
@@ -979,7 +980,7 @@ static int otx2_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
                                         const unsigned char *key,
                                         unsigned int keylen)
 {
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(cipher);
 
         /*
          * For aes gcm we expect to get encryption key (16, 24, 32 bytes)
@@ -1012,9 +1013,9 @@ static int otx2_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher,
 static inline int create_aead_ctx_hdr(struct aead_request *req, u32 enc,
                                       u32 *argcnt)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         struct otx2_cpt_fc_ctx *fctx = &rctx->fctx;
         int mac_len = crypto_aead_authsize(tfm);
@@ -1103,9 +1104,9 @@ static inline int create_aead_ctx_hdr(struct aead_request *req, u32 enc,
 static inline void create_hmac_ctx_hdr(struct aead_request *req, u32 *argcnt,
                                        u32 enc)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
 
         req_info->ctrl.s.dma_mode = OTX2_CPT_DMA_MODE_SG;
@@ -1127,7 +1128,7 @@ static inline void create_hmac_ctx_hdr(struct aead_request *req, u32 *argcnt,
 
 static inline int create_aead_input_list(struct aead_request *req, u32 enc)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         u32 inputlen = req->cryptlen + req->assoclen;
         u32 status, argcnt = 0;
@@ -1144,7 +1145,7 @@ static inline int create_aead_input_list(struct aead_request *req, u32 enc)
 static inline void create_aead_output_list(struct aead_request *req, u32 enc,
                                            u32 mac_len)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         u32 argcnt = 0, outputlen = 0;
 
@@ -1160,7 +1161,7 @@ static inline void create_aead_output_list(struct aead_request *req, u32 enc,
 static inline void create_aead_null_input_list(struct aead_request *req,
                                                u32 enc, u32 mac_len)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         u32 inputlen, argcnt = 0;
 
@@ -1177,7 +1178,7 @@ static inline void create_aead_null_input_list(struct aead_request *req,
 static inline int create_aead_null_output_list(struct aead_request *req,
                                                u32 enc, u32 mac_len)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         struct scatterlist *dst;
         u8 *ptr = NULL;
@@ -1257,9 +1258,9 @@ static inline int create_aead_null_output_list(struct aead_request *req,
 
 static int aead_do_fallback(struct aead_request *req, bool is_enc)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *aead = crypto_aead_reqtfm(req);
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(aead);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(aead);
         int ret;
 
         if (ctx->fbk_cipher) {
@@ -1281,10 +1282,10 @@ static int aead_do_fallback(struct aead_request *req, bool is_enc)
 
 static int cpt_aead_enc_dec(struct aead_request *req, u8 reg_type, u8 enc)
 {
-        struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req);
+        struct otx2_cpt_req_ctx *rctx = aead_request_ctx_dma(req);
         struct otx2_cpt_req_info *req_info = &rctx->cpt_req;
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm);
+        struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx_dma(tfm);
         struct pci_dev *pdev;
         int status, cpu_num;
@@ -1458,7 +1459,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha1_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1477,7 +1478,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha256_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1496,7 +1497,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha384_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1515,7 +1516,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha512_cbc_aes",
                 .cra_blocksize = AES_BLOCK_SIZE,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1534,7 +1535,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha1_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1553,7 +1554,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha256_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1572,7 +1573,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha384_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1591,7 +1592,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_hmac_sha512_ecb_null",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
@@ -1610,7 +1611,7 @@ static struct aead_alg otx2_cpt_aeads[] = { {
                 .cra_driver_name = "cpt_rfc4106_gcm_aes",
                 .cra_blocksize = 1,
                 .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK,
-                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx),
+                .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx) + CRYPTO_DMA_PADDING,
                 .cra_priority = 4001,
                 .cra_alignmask = 0,
                 .cra_module = THIS_MODULE,
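Both octeontx drivers additionally show that the size handed to the _dma setter may be a composite, driver-private state plus a nested request for cryptd, and that the alignment headroom is added by the helper on top of whatever total is requested. A minimal sketch, reusing the hypothetical example_rctx from the first sketch:

static int example_skcipher_init(struct crypto_skcipher *tfm)
{
        /*
         * The DMA-alignment headroom is reserved by the helper on top of
         * this total, so composite request contexts (private state plus a
         * nested request) need no further size adjustment by the driver.
         */
        crypto_skcipher_set_reqsize_dma(tfm,
                                        sizeof(struct example_rctx) +
                                        sizeof(struct skcipher_request));
        return 0;
}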
From patchwork Fri Dec 2 09:21:05 2022
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 13062458
From: "Herbert Xu"
Date: Fri, 02 Dec 2022 17:21:05 +0800
Subject: [PATCH 10/10] crypto: qce - Set DMA alignment explicitly

This driver has been implicitly relying on kmalloc alignment
to be sufficient for DMA.  This may no longer be the case with
upcoming arm64 changes.

This patch changes it to explicitly request DMA alignment from
the Crypto API.

Signed-off-by: Herbert Xu
---
 drivers/crypto/qce/aead.c   | 22 +++++++++++-----------
 drivers/crypto/qce/common.c |  5 +++--
 drivers/crypto/qce/sha.c    | 18 +++++++++---------
 3 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
index 6eb4d2e35629..7d811728f047 100644
--- a/drivers/crypto/qce/aead.c
+++ b/drivers/crypto/qce/aead.c
@@ -24,7 +24,7 @@ static void qce_aead_done(void *data)
 {
         struct crypto_async_request *async_req = data;
         struct aead_request *req = aead_request_cast(async_req);
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
         struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
         struct qce_device *qce = tmpl->qce;
@@ -92,7 +92,7 @@ static void qce_aead_done(void *data)
 static struct scatterlist *
 qce_aead_prepare_result_buf(struct sg_table *tbl, struct aead_request *req)
 {
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
         struct qce_device *qce = tmpl->qce;
 
@@ -103,7 +103,7 @@ qce_aead_prepare_result_buf(struct sg_table *tbl, struct aead_request *req)
 static struct scatterlist *
 qce_aead_prepare_ccm_result_buf(struct sg_table *tbl, struct aead_request *req)
 {
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
 
         sg_init_one(&rctx->result_sg, rctx->ccmresult_buf, QCE_BAM_BURST_SIZE);
         return qce_sgtable_add(tbl, &rctx->result_sg, QCE_BAM_BURST_SIZE);
@@ -112,7 +112,7 @@ qce_aead_prepare_ccm_result_buf(struct sg_table *tbl, struct aead_request *req)
 
 static struct scatterlist *
 qce_aead_prepare_dst_buf(struct aead_request *req)
 {
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
         struct qce_device *qce = tmpl->qce;
         struct scatterlist *sg, *msg_sg, __sg[2];
@@ -186,7 +186,7 @@ qce_aead_ccm_prepare_buf_assoclen(struct aead_request *req)
 {
         struct scatterlist *sg, *msg_sg, __sg[2];
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
         unsigned int assoclen = rctx->assoclen;
         unsigned int adata_header_len, cryptlen, totallen;
@@ -300,7 +300,7 @@ qce_aead_ccm_prepare_buf_assoclen(struct aead_request *req)
 
 static int qce_aead_prepare_buf(struct aead_request *req)
 {
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
         struct qce_device *qce = tmpl->qce;
         struct scatterlist *sg;
@@ -328,7 +328,7 @@ static int qce_aead_prepare_buf(struct aead_request *req)
 
 static int qce_aead_ccm_prepare_buf(struct aead_request *req)
 {
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
         struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
         struct scatterlist *sg;
@@ -408,7 +408,7 @@ static int
 qce_aead_async_req_handle(struct crypto_async_request *async_req)
 {
         struct aead_request *req = aead_request_cast(async_req);
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
         struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
         struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
@@ -502,7 +502,7 @@ qce_aead_async_req_handle(struct crypto_async_request *async_req)
 static int qce_aead_crypt(struct aead_request *req, int encrypt)
 {
         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_aead_ctx *ctx = crypto_aead_ctx(tfm);
         struct qce_alg_template *tmpl = to_aead_tmpl(tfm);
         unsigned int blocksize = crypto_aead_blocksize(tfm);
@@ -675,8 +675,8 @@ static int qce_aead_init(struct crypto_aead *tfm)
         if (IS_ERR(ctx->fallback))
                 return PTR_ERR(ctx->fallback);
 
-        crypto_aead_set_reqsize(tfm, sizeof(struct qce_aead_reqctx) +
-                                     crypto_aead_reqsize(ctx->fallback));
+        crypto_aead_set_reqsize_dma(tfm, sizeof(struct qce_aead_reqctx) +
+                                         crypto_aead_reqsize(ctx->fallback));
         return 0;
 }
 
diff --git a/drivers/crypto/qce/common.c b/drivers/crypto/qce/common.c
index 7c612ba5068f..04253a8d3340 100644
--- a/drivers/crypto/qce/common.c
+++ b/drivers/crypto/qce/common.c
@@ -3,6 +3,7 @@
  * Copyright (c) 2012-2014, The Linux Foundation. All rights reserved.
  */
 
+#include
 #include
 #include
 #include
@@ -147,7 +148,7 @@ static int qce_setup_regs_ahash(struct crypto_async_request *async_req)
 {
         struct ahash_request *req = ahash_request_cast(async_req);
         struct crypto_ahash *ahash = __crypto_ahash_cast(async_req->tfm);
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_ahash_tmpl(async_req->tfm);
         struct qce_device *qce = tmpl->qce;
         unsigned int digestsize = crypto_ahash_digestsize(ahash);
@@ -419,7 +420,7 @@ static unsigned int qce_be32_to_cpu_array(u32 *dst, const u8 *src, unsigned int
 static int qce_setup_regs_aead(struct crypto_async_request *async_req)
 {
         struct aead_request *req = aead_request_cast(async_req);
-        struct qce_aead_reqctx *rctx = aead_request_ctx(req);
+        struct qce_aead_reqctx *rctx = aead_request_ctx_dma(req);
         struct qce_aead_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
         struct qce_alg_template *tmpl = to_aead_tmpl(crypto_aead_reqtfm(req));
         struct qce_device *qce = tmpl->qce;
diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index 37bafd7aeb79..fc72af8aa9a7 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -38,7 +38,7 @@ static void qce_ahash_done(void *data)
         struct crypto_async_request *async_req = data;
         struct ahash_request *req = ahash_request_cast(async_req);
         struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_ahash_tmpl(async_req->tfm);
         struct qce_device *qce = tmpl->qce;
         struct qce_result_dump *result = qce->dma.result_buf;
@@ -75,7 +75,7 @@ static void qce_ahash_done(void *data)
 static int qce_ahash_async_req_handle(struct crypto_async_request *async_req)
 {
         struct ahash_request *req = ahash_request_cast(async_req);
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_sha_ctx *ctx = crypto_tfm_ctx(async_req->tfm);
         struct qce_alg_template *tmpl = to_ahash_tmpl(async_req->tfm);
         struct qce_device *qce = tmpl->qce;
@@ -132,7 +132,7 @@ static int qce_ahash_async_req_handle(struct crypto_async_request *async_req)
 
 static int qce_ahash_init(struct ahash_request *req)
 {
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_ahash_tmpl(req->base.tfm);
         const u32 *std_iv = tmpl->std_iv;
 
@@ -147,7 +147,7 @@ static int qce_ahash_init(struct ahash_request *req)
 
 static int qce_ahash_export(struct ahash_request *req, void *out)
 {
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_sha_saved_state *export_state = out;
 
         memcpy(export_state->pending_buf, rctx->buf, rctx->buflen);
@@ -164,7 +164,7 @@ static int qce_ahash_export(struct ahash_request *req, void *out)
 
 static int qce_ahash_import(struct ahash_request *req, const void *in)
 {
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         const struct qce_sha_saved_state *import_state = in;
 
         memset(rctx, 0, sizeof(*rctx));
@@ -183,7 +183,7 @@ static int qce_ahash_import(struct ahash_request *req, const void *in)
 static int qce_ahash_update(struct ahash_request *req)
 {
         struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_ahash_tmpl(req->base.tfm);
         struct qce_device *qce = tmpl->qce;
         struct scatterlist *sg_last, *sg;
@@ -275,7 +275,7 @@ static int qce_ahash_update(struct ahash_request *req)
 
 static int qce_ahash_final(struct ahash_request *req)
 {
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_ahash_tmpl(req->base.tfm);
         struct qce_device *qce = tmpl->qce;
 
@@ -302,7 +302,7 @@ static int qce_ahash_final(struct ahash_request *req)
 
 static int qce_ahash_digest(struct ahash_request *req)
 {
-        struct qce_sha_reqctx *rctx = ahash_request_ctx(req);
+        struct qce_sha_reqctx *rctx = ahash_request_ctx_dma(req);
         struct qce_alg_template *tmpl = to_ahash_tmpl(req->base.tfm);
         struct qce_device *qce = tmpl->qce;
         int ret;
@@ -395,7 +395,7 @@ static int qce_ahash_cra_init(struct crypto_tfm *tfm)
         struct crypto_ahash *ahash = __crypto_ahash_cast(tfm);
         struct qce_sha_ctx *ctx = crypto_tfm_ctx(tfm);
 
-        crypto_ahash_set_reqsize(ahash, sizeof(struct qce_sha_reqctx));
+        crypto_ahash_set_reqsize_dma(ahash, sizeof(struct qce_sha_reqctx));
         memset(ctx, 0, sizeof(*ctx));
         return 0;
 }
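One last consistency point is visible in the qce hash conversion: export() and import() copy the request context wholesale, so once a driver adopts ahash_request_ctx_dma() every accessor in it must switch together; mixing the plain and _dma accessors would save state from one offset and run the hardware on another. A sketch of the invariant, again with the hypothetical names used above:

static int example_export(struct ahash_request *req, void *out)
{
        /*
         * Same _dma accessor as init/update/final, so the exported bytes
         * come from the aligned context the rest of the driver uses.
         */
        struct example_rctx *rctx = ahash_request_ctx_dma(req);

        memcpy(out, rctx, sizeof(*rctx));
        return 0;
}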