From patchwork Sat Dec 21 06:31:11 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13917680
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
 usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
 akpm@linux-foundation.org, linux-crypto@vger.kernel.org,
 herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
 ardb@kernel.org, ebiggers@google.com, surenb@google.com,
 kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
 kanchana.p.sridhar@intel.com
Subject: [PATCH v5 04/12] crypto: iaa - Implement batch_compress(),
 batch_decompress() API in iaa_crypto.
Date: Fri, 20 Dec 2024 22:31:11 -0800
Message-Id: <20241221063119.29140-5-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241221063119.29140-1-kanchana.p.sridhar@intel.com>
References: <20241221063119.29140-1-kanchana.p.sridhar@intel.com>

This patch provides iaa_crypto driver implementations for the newly added
crypto_acomp batch_compress() and batch_decompress() interfaces, using
acomp request chaining.
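For context, the chaining pattern the driver hooks below build on looks
roughly like this (a minimal sketch distilled from the diff in this patch,
using the acomp request chaining helpers introduced earlier in this
series; setup and error handling omitted):

	struct crypto_wait wait;
	int i, err;

	crypto_init_wait(&wait);

	/* reqs[0] becomes the chain head; link the remaining reqs onto it. */
	for (i = 0; i < nr_reqs; ++i) {
		if (i)
			acomp_request_chain(reqs[i], reqs[0]);
		else
			acomp_reqchain_init(reqs[0], 0, crypto_req_done, &wait);
	}

	/* Issue the whole chain through the driver's compress op and wait. */
	err = crypto_wait_req(acomp_do_req_chain(reqs[0], iaa_comp_acompress),
			      &wait);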
iaa_crypto also implements the new crypto_acomp get_batch_size() interface
that returns an IAA-driver-specific constant, IAA_CRYPTO_MAX_BATCH_SIZE
(currently set to 8U). This allows swap modules such as zswap/zram to
allocate the required batching resources and then invoke fully
asynchronous batch parallel compression/decompression of pages on systems
with Intel IAA, by invoking these APIs, respectively:

 crypto_acomp_batch_size(...);
 crypto_acomp_batch_compress(...);
 crypto_acomp_batch_decompress(...);

This enables zswap compress batching code to be developed in a manner
similar to the current single-page synchronous calls to:

 crypto_acomp_compress(...);
 crypto_acomp_decompress(...);

thereby facilitating an encapsulated and modular hand-off between the
kernel zswap/zram code and the crypto_acomp layer.

Since iaa_crypto supports the use of acomp request chaining, this patch
also adds CRYPTO_ALG_REQ_CHAIN to the iaa_acomp_fixed_deflate algorithm's
cra_flags.

Suggested-by: Yosry Ahmed <yosryahmed@google.com>
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 drivers/crypto/intel/iaa/iaa_crypto.h      |   9 +
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 395 ++++++++++++++++++++-
 2 files changed, 403 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto.h b/drivers/crypto/intel/iaa/iaa_crypto.h
index 56985e395263..b3b67c44ec8a 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto.h
+++ b/drivers/crypto/intel/iaa/iaa_crypto.h
@@ -39,6 +39,15 @@
 					IAA_DECOMP_CHECK_FOR_EOB |	\
 					IAA_DECOMP_STOP_ON_EOB)
 
+/*
+ * The maximum compress/decompress batch size for IAA's implementation of
+ * the crypto_acomp batch_compress() and batch_decompress() interfaces.
+ * The IAA compression algorithms should provide the crypto_acomp
+ * get_batch_size() interface through a function that returns this
+ * constant.
+ */
+#define IAA_CRYPTO_MAX_BATCH_SIZE 8U
+
 /* Representation of IAA workqueue */
 struct iaa_wq {
 	struct list_head	list;
diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index 29d03df39fab..b51b0b4b9ac3 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1807,6 +1807,396 @@ static void compression_ctx_init(struct iaa_compression_ctx *ctx)
 	ctx->use_irq = use_irq;
 }
 
+static int iaa_comp_poll(struct acomp_req *req)
+{
+	struct idxd_desc *idxd_desc;
+	struct idxd_device *idxd;
+	struct iaa_wq *iaa_wq;
+	struct pci_dev *pdev;
+	struct device *dev;
+	struct idxd_wq *wq;
+	bool compress_op;
+	int ret;
+
+	idxd_desc = req->base.data;
+	if (!idxd_desc)
+		return -EAGAIN;
+
+	compress_op = (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS);
+	wq = idxd_desc->wq;
+	iaa_wq = idxd_wq_get_private(wq);
+	idxd = iaa_wq->iaa_device->idxd;
+	pdev = idxd->pdev;
+	dev = &pdev->dev;
+
+	ret = check_completion(dev, idxd_desc->iax_completion, true, true);
+	if (ret == -EAGAIN)
+		return ret;
+	if (ret)
+		goto out;
+
+	req->dlen = idxd_desc->iax_completion->output_size;
+
+	/* Update stats */
+	if (compress_op) {
+		update_total_comp_bytes_out(req->dlen);
+		update_wq_comp_bytes(wq, req->dlen);
+	} else {
+		update_total_decomp_bytes_in(req->slen);
+		update_wq_decomp_bytes(wq, req->slen);
+	}
+
+	if (iaa_verify_compress && (idxd_desc->iax_hw->opcode == IAX_OPCODE_COMPRESS)) {
+		struct crypto_tfm *tfm = req->base.tfm;
+		dma_addr_t src_addr, dst_addr;
+		u32 compression_crc;
+
+		compression_crc = idxd_desc->iax_completion->crc;
+
+		dma_sync_sg_for_device(dev, req->dst, 1, DMA_FROM_DEVICE);
+		dma_sync_sg_for_device(dev, req->src, 1, DMA_TO_DEVICE);
+
+		src_addr = sg_dma_address(req->src);
+		dst_addr = sg_dma_address(req->dst);
+
+		ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
+					  dst_addr, &req->dlen, compression_crc);
+	}
+out:
+	/* caller doesn't call crypto_wait_req, so no acomp_request_complete() */
+
+	dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
+	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+
+	idxd_free_desc(idxd_desc->wq, idxd_desc);
+
+	dev_dbg(dev, "%s: returning ret=%d\n", __func__, ret);
+
+	return ret;
+}
+
+static unsigned int iaa_comp_get_batch_size(void)
+{
+	return IAA_CRYPTO_MAX_BATCH_SIZE;
+}
+
+static void iaa_set_req_poll(
+	struct acomp_req *reqs[],
+	int nr_reqs,
+	bool set_flag)
+{
+	int i;
+
+	for (i = 0; i < nr_reqs; ++i) {
+		set_flag ? (reqs[i]->flags |= CRYPTO_ACOMP_REQ_POLL) :
+			   (reqs[i]->flags &= ~CRYPTO_ACOMP_REQ_POLL);
+	}
+}
+
+/**
+ * This API provides IAA compress batching functionality for use by swap
+ * modules.
+ *
+ * @reqs: @nr_pages asynchronous compress requests.
+ * @wait: crypto_wait for acomp batch compress implemented using request
+ * chaining. Required if async_mode is "false". If async_mode is "true",
+ * and @wait is NULL, the completions will be processed using
+ * asynchronous polling of the requests' completion statuses.
+ * @pages: Pages to be compressed by IAA.
+ * @dsts: Pre-allocated destination buffers to store results of IAA
+ * compression. Each element of @dsts must be of size "PAGE_SIZE * 2".
+ * @dlens: Will contain the compressed lengths.
+ * @errors: zero on successful compression of the corresponding
+ * req, or error code in case of error.
+ * @nr_pages: The number of pages, up to IAA_CRYPTO_MAX_BATCH_SIZE,
+ * to be compressed.
+ *
+ * Returns true if all compress requests complete successfully,
+ * false otherwise.
+ */
+static bool iaa_comp_acompress_batch(
+	struct acomp_req *reqs[],
+	struct crypto_wait *wait,
+	struct page *pages[],
+	u8 *dsts[],
+	unsigned int dlens[],
+	int errors[],
+	int nr_pages)
+{
+	struct scatterlist inputs[IAA_CRYPTO_MAX_BATCH_SIZE];
+	struct scatterlist outputs[IAA_CRYPTO_MAX_BATCH_SIZE];
+	bool compressions_done = false;
+	bool async = (async_mode && !use_irq);
+	bool async_poll = (async && !wait);
+	int i, err = 0;
+
+	BUG_ON(nr_pages > IAA_CRYPTO_MAX_BATCH_SIZE);
+	BUG_ON(!async && !wait);
+
+	if (async)
+		iaa_set_req_poll(reqs, nr_pages, true);
+	else
+		iaa_set_req_poll(reqs, nr_pages, false);
+
+	/*
+	 * Prepare and submit acomp_reqs to IAA. IAA will process these
+	 * compress jobs in parallel if async_mode is true.
+	 */
+	for (i = 0; i < nr_pages; ++i) {
+		sg_init_table(&inputs[i], 1);
+		sg_set_page(&inputs[i], pages[i], PAGE_SIZE, 0);
+
+		/*
+		 * Each dst buffer should be of size (PAGE_SIZE * 2).
+		 * Reflect same in sg_list.
+		 */
+		sg_init_one(&outputs[i], dsts[i], PAGE_SIZE * 2);
+		acomp_request_set_params(reqs[i], &inputs[i],
+					 &outputs[i], PAGE_SIZE, dlens[i]);
+
+		/*
+		 * As long as the API is called with a valid "wait", chain the
+		 * requests for synchronous/asynchronous compress ops.
+		 * If async_mode is in effect, but the API is called with a
+		 * NULL "wait", submit the requests first, and poll for
+		 * their completion status later, after all descriptors have
+		 * been submitted.
+		 */
+		if (!async_poll) {
+			/* acomp request chaining. */
+			if (i)
+				acomp_request_chain(reqs[i], reqs[0]);
+			else
+				acomp_reqchain_init(reqs[0], 0, crypto_req_done,
						    wait);
+		} else {
+			errors[i] = iaa_comp_acompress(reqs[i]);
+
+			if (errors[i] != -EINPROGRESS) {
+				errors[i] = -EINVAL;
+				err = -EINVAL;
+			} else {
+				errors[i] = -EAGAIN;
+			}
+		}
+	}
+
+	if (!async_poll) {
+		if (async)
+			/* Process the request chain in parallel. */
+			err = crypto_wait_req(acomp_do_async_req_chain(reqs[0],
+					      iaa_comp_acompress, iaa_comp_poll),
+					      wait);
+		else
+			/* Process the request chain in series. */
+			err = crypto_wait_req(acomp_do_req_chain(reqs[0],
+					      iaa_comp_acompress), wait);
+
+		for (i = 0; i < nr_pages; ++i) {
+			errors[i] = acomp_request_err(reqs[i]);
+			if (errors[i]) {
+				err = -EINVAL;
+				pr_debug("Request chaining req %d compress error %d\n", i, errors[i]);
+			} else {
+				dlens[i] = reqs[i]->dlen;
+			}
+		}
+
+		goto reset_reqs;
+	}
+
+	/*
+	 * Asynchronously poll for and process IAA compress job completions.
+	 */
+	while (!compressions_done) {
+		compressions_done = true;
+
+		for (i = 0; i < nr_pages; ++i) {
+			/*
+			 * Skip, if the compression has already completed
+			 * successfully or with an error.
+			 */
+			if (errors[i] != -EAGAIN)
+				continue;
+
+			errors[i] = iaa_comp_poll(reqs[i]);
+
+			if (errors[i]) {
+				if (errors[i] == -EAGAIN)
+					compressions_done = false;
+				else
+					err = -EINVAL;
+			} else {
+				dlens[i] = reqs[i]->dlen;
+			}
+		}
+	}
+
+reset_reqs:
+	/*
+	 * For the same 'reqs[]' to be usable by
+	 * iaa_comp_acompress()/iaa_comp_adecompress(),
+	 * clear the CRYPTO_ACOMP_REQ_POLL bit on all acomp_reqs, and the
+	 * CRYPTO_TFM_REQ_CHAIN bit on the reqs[0].
+	 */
+	iaa_set_req_poll(reqs, nr_pages, false);
+	if (!async_poll)
+		acomp_reqchain_clear(reqs[0], wait);
+
+	return !err;
+}
+
+/**
+ * This API provides IAA decompress batching functionality for use by swap
+ * modules.
+ *
+ * @reqs: @nr_pages asynchronous decompress requests.
+ * @wait: crypto_wait for acomp batch decompress implemented using request
+ * chaining. Required if async_mode is "false". If async_mode is "true",
+ * and @wait is NULL, the completions will be processed using
+ * asynchronous polling of the requests' completion statuses.
+ * @srcs: The src buffers to be decompressed by IAA.
+ * @pages: The pages to store the decompressed buffers.
+ * @slens: Compressed lengths of @srcs.
+ * @errors: zero on successful decompression of the corresponding
+ * req, or error code in case of error.
+ * @nr_pages: The number of pages, up to IAA_CRYPTO_MAX_BATCH_SIZE,
+ * to be decompressed.
+ *
+ * Returns true if all decompress requests complete successfully,
+ * false otherwise.
+ */
+static bool iaa_comp_adecompress_batch(
+	struct acomp_req *reqs[],
+	struct crypto_wait *wait,
+	u8 *srcs[],
+	struct page *pages[],
+	unsigned int slens[],
+	int errors[],
+	int nr_pages)
+{
+	struct scatterlist inputs[IAA_CRYPTO_MAX_BATCH_SIZE];
+	struct scatterlist outputs[IAA_CRYPTO_MAX_BATCH_SIZE];
+	unsigned int dlens[IAA_CRYPTO_MAX_BATCH_SIZE];
+	bool decompressions_done = false;
+	bool async = (async_mode && !use_irq);
+	bool async_poll = (async && !wait);
+	int i, err = 0;
+
+	BUG_ON(nr_pages > IAA_CRYPTO_MAX_BATCH_SIZE);
+	BUG_ON(!async && !wait);
+
+	if (async)
+		iaa_set_req_poll(reqs, nr_pages, true);
+	else
+		iaa_set_req_poll(reqs, nr_pages, false);
+
+	/*
+	 * Prepare and submit acomp_reqs to IAA. IAA will process these
+	 * decompress jobs in parallel if async_mode is true.
+	 */
+	for (i = 0; i < nr_pages; ++i) {
+		dlens[i] = PAGE_SIZE;
+		sg_init_one(&inputs[i], srcs[i], slens[i]);
+		sg_init_table(&outputs[i], 1);
+		sg_set_page(&outputs[i], pages[i], PAGE_SIZE, 0);
+		acomp_request_set_params(reqs[i], &inputs[i],
+					 &outputs[i], slens[i], dlens[i]);
+
+		/*
+		 * As long as the API is called with a valid "wait", chain the
+		 * requests for synchronous/asynchronous decompress ops.
+		 * If async_mode is in effect, but the API is called with a
+		 * NULL "wait", submit the requests first, and poll for
+		 * their completion status later, after all descriptors have
+		 * been submitted.
+		 */
+		if (!async_poll) {
+			/* acomp request chaining. */
+			if (i)
+				acomp_request_chain(reqs[i], reqs[0]);
+			else
+				acomp_reqchain_init(reqs[0], 0, crypto_req_done,
+						    wait);
+		} else {
+			errors[i] = iaa_comp_adecompress(reqs[i]);
+
+			if (errors[i] != -EINPROGRESS) {
+				errors[i] = -EINVAL;
+				err = -EINVAL;
+			} else {
+				errors[i] = -EAGAIN;
+			}
+		}
+	}
+
+	if (!async_poll) {
+		if (async)
+			/* Process the request chain in parallel. */
+			err = crypto_wait_req(acomp_do_async_req_chain(reqs[0],
+					      iaa_comp_adecompress, iaa_comp_poll),
+					      wait);
+		else
+			/* Process the request chain in series. */
+			err = crypto_wait_req(acomp_do_req_chain(reqs[0],
+					      iaa_comp_adecompress), wait);
+
+		for (i = 0; i < nr_pages; ++i) {
+			errors[i] = acomp_request_err(reqs[i]);
+			if (errors[i]) {
+				err = -EINVAL;
+				pr_debug("Request chaining req %d decompress error %d\n", i, errors[i]);
+			} else {
+				dlens[i] = reqs[i]->dlen;
+				BUG_ON(dlens[i] != PAGE_SIZE);
+			}
+		}
+
+		goto reset_reqs;
+	}
+
+	/*
+	 * Asynchronously poll for and process IAA decompress job completions.
+	 */
+	while (!decompressions_done) {
+		decompressions_done = true;
+
+		for (i = 0; i < nr_pages; ++i) {
+			/*
+			 * Skip, if the decompression has already completed
+			 * successfully or with an error.
+			 */
+			if (errors[i] != -EAGAIN)
+				continue;
+
+			errors[i] = iaa_comp_poll(reqs[i]);
+
+			if (errors[i]) {
+				if (errors[i] == -EAGAIN)
+					decompressions_done = false;
+				else
+					err = -EINVAL;
+			} else {
+				dlens[i] = reqs[i]->dlen;
+				BUG_ON(dlens[i] != PAGE_SIZE);
+			}
+		}
+	}
+
+reset_reqs:
+	/*
+	 * For the same 'reqs[]' to be usable by
+	 * iaa_comp_acompress()/iaa_comp_adecompress(),
+	 * clear the CRYPTO_ACOMP_REQ_POLL bit on all acomp_reqs, and the
+	 * CRYPTO_TFM_REQ_CHAIN bit on the reqs[0].
+	 */
+	iaa_set_req_poll(reqs, nr_pages, false);
+	if (!async_poll)
+		acomp_reqchain_clear(reqs[0], wait);
+
+	return !err;
+}
+
 static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
 {
 	struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
@@ -1832,10 +2222,13 @@ static struct acomp_alg iaa_acomp_fixed_deflate = {
 	.compress = iaa_comp_acompress,
 	.decompress = iaa_comp_adecompress,
 	.dst_free = dst_free,
+	.get_batch_size = iaa_comp_get_batch_size,
+	.batch_compress = iaa_comp_acompress_batch,
+	.batch_decompress = iaa_comp_adecompress_batch,
 	.base = {
 		.cra_name = "deflate",
 		.cra_driver_name = "deflate-iaa",
-		.cra_flags = CRYPTO_ALG_ASYNC,
+		.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_CHAIN,
 		.cra_ctxsize = sizeof(struct iaa_compression_ctx),
 		.cra_module = THIS_MODULE,
 		.cra_priority = IAA_ALG_PRIORITY,
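
For reference, a zswap/zram-style caller of the new interfaces might look
roughly as follows. This is a hypothetical sketch: the
crypto_acomp_batch_compress() wrapper is defined in an earlier patch of
this series, and its signature is assumed here to mirror the
batch_compress() driver hook registered above.

	unsigned int batch_size = crypto_acomp_batch_size(acomp);
	struct crypto_wait wait;

	/* reqs[], pages[], dsts[], dlens[] and errors[] are allocated up
	 * front with batch_size entries; each dsts[i] buffer must be
	 * PAGE_SIZE * 2 bytes, per the batch_compress() requirements.
	 */
	crypto_init_wait(&wait);

	if (!crypto_acomp_batch_compress(reqs, &wait, pages, dsts, dlens,
					 errors, nr_pages)) {
		/* Per-request status is in errors[]; a caller can fall
		 * back to compressing the failed pages one at a time.
		 */
	}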