From patchwork Thu Feb 6 07:21:01 2025
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13962465
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    akpm@linux-foundation.org, linux-crypto@vger.kernel.org,
    herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
    ardb@kernel.org, ebiggers@google.com, surenb@google.com,
    kristen.c.accardi@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v6 15/16] mm: zswap: Compress batching with Intel IAA in
 zswap_store() of large folios.
Date: Wed, 5 Feb 2025 23:21:01 -0800
Message-Id: <20250206072102.29045-16-kanchana.p.sridhar@intel.com>
In-Reply-To: <20250206072102.29045-1-kanchana.p.sridhar@intel.com>
References: <20250206072102.29045-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0
zswap_compress_folio() is modified to detect whether the pool's acomp_ctx
has more than one "nr_reqs", which will be the case if the CPU onlining
code has allocated multiple batching resources in the acomp_ctx. If so,
compress batching can be used with a batch size of "acomp_ctx->nr_reqs",
and zswap_compress_folio() will invoke the newly added
zswap_batch_compress() procedure to compress and store the folio in
batches of "acomp_ctx->nr_reqs" pages. With Intel IAA, the iaa_crypto
driver will compress each batch of pages in parallel in hardware.
Hence, zswap_batch_compress() performs the same computations for a batch
as zswap_compress() does for a single page, and returns true if the batch
was successfully compressed and stored, false otherwise.

If the pool does not support compress batching, or the folio has only one
page, zswap_compress_folio() calls zswap_compress() for each individual
page in the folio, as before.

Signed-off-by: Kanchana P Sridhar
---
 mm/zswap.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 113 insertions(+), 9 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 6563d12e907b..f1cba77eda62 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -985,10 +985,11 @@ static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
 	mutex_unlock(&acomp_ctx->mutex);
 }
 
+/* The per-cpu @acomp_ctx mutex should be locked/unlocked in the caller. */
 static bool zswap_compress(struct page *page, struct zswap_entry *entry,
-			   struct zswap_pool *pool)
+			   struct zswap_pool *pool,
+			   struct crypto_acomp_ctx *acomp_ctx)
 {
-	struct crypto_acomp_ctx *acomp_ctx;
 	struct scatterlist input, output;
 	int comp_ret = 0, alloc_ret = 0;
 	unsigned int dlen = PAGE_SIZE;
@@ -998,7 +999,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	gfp_t gfp;
 	u8 *dst;
 
-	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
 	dst = acomp_ctx->buffers[0];
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
@@ -1051,7 +1051,6 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	else if (alloc_ret)
 		zswap_reject_alloc_fail++;
 
-	acomp_ctx_put_unlock(acomp_ctx);
 	return comp_ret == 0 && alloc_ret == 0;
 }
 
@@ -1509,20 +1508,125 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/
 
+/* The per-cpu @acomp_ctx mutex should be locked/unlocked in the caller. */
+static bool zswap_batch_compress(struct folio *folio,
+				 long index,
+				 unsigned int batch_size,
+				 struct zswap_entry *entries[],
+				 struct zswap_pool *pool,
+				 struct crypto_acomp_ctx *acomp_ctx)
+{
+	int comp_errors[ZSWAP_MAX_BATCH_SIZE] = { 0 };
+	unsigned int dlens[ZSWAP_MAX_BATCH_SIZE];
+	struct page *pages[ZSWAP_MAX_BATCH_SIZE];
+	unsigned int i, nr_batch_pages;
+	bool ret = true;
+
+	nr_batch_pages = min((unsigned int)(folio_nr_pages(folio) - index), batch_size);
+
+	for (i = 0; i < nr_batch_pages; ++i) {
+		pages[i] = folio_page(folio, index + i);
+		dlens[i] = PAGE_SIZE;
+	}
+
+	/*
+	 * Batch compress @nr_batch_pages. If IAA is the compressor, the
+	 * hardware will compress @nr_batch_pages in parallel.
+	 */
+	ret = crypto_acomp_batch_compress(
+		acomp_ctx->reqs,
+		NULL,
+		pages,
+		acomp_ctx->buffers,
+		dlens,
+		comp_errors,
+		nr_batch_pages);
+
+	if (ret) {
+		/*
+		 * All batch pages were successfully compressed.
+		 * Store the pages in zpool.
+		 */
+		struct zpool *zpool = pool->zpool;
+		gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
+
+		if (zpool_malloc_support_movable(zpool))
+			gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
+
+		for (i = 0; i < nr_batch_pages; ++i) {
+			unsigned long handle;
+			char *buf;
+			int err;
+
+			err = zpool_malloc(zpool, dlens[i], gfp, &handle);
+
+			if (err) {
+				if (err == -ENOSPC)
+					zswap_reject_compress_poor++;
+				else
+					zswap_reject_alloc_fail++;
+
+				ret = false;
+				break;
+			}
+
+			buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
+			memcpy(buf, acomp_ctx->buffers[i], dlens[i]);
+			zpool_unmap_handle(zpool, handle);
+
+			entries[i]->handle = handle;
+			entries[i]->length = dlens[i];
+		}
+	} else {
+		/* Some batch pages had compression errors. */
+		for (i = 0; i < nr_batch_pages; ++i) {
+			if (comp_errors[i]) {
+				if (comp_errors[i] == -ENOSPC)
+					zswap_reject_compress_poor++;
+				else
+					zswap_reject_compress_fail++;
+			}
+		}
+	}
+
+	return ret;
+}
+
 static bool zswap_compress_folio(struct folio *folio,
 				 struct zswap_entry *entries[],
 				 struct zswap_pool *pool)
 {
 	long index, nr_pages = folio_nr_pages(folio);
+	struct crypto_acomp_ctx *acomp_ctx;
+	unsigned int batch_size;
+	bool ret = true;
 
-	for (index = 0; index < nr_pages; ++index) {
-		struct page *page = folio_page(folio, index);
+	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+	batch_size = acomp_ctx->nr_reqs;
+
+	if ((batch_size > 1) && (nr_pages > 1)) {
+		for (index = 0; index < nr_pages; index += batch_size) {
+
+			if (!zswap_batch_compress(folio, index, batch_size,
+						  &entries[index], pool, acomp_ctx)) {
+				ret = false;
+				goto unlock_acomp_ctx;
+			}
+		}
+	} else {
+		for (index = 0; index < nr_pages; ++index) {
+			struct page *page = folio_page(folio, index);
 
-		if (!zswap_compress(page, entries[index], pool))
-			return false;
+			if (!zswap_compress(page, entries[index], pool, acomp_ctx)) {
+				ret = false;
+				goto unlock_acomp_ctx;
+			}
+		}
 	}
 
-	return true;
+unlock_acomp_ctx:
+	acomp_ctx_put_unlock(acomp_ctx);
+	return ret;
 }
 
 /*