From patchwork Wed Nov 27 22:53:23 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13887410
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    akpm@linux-foundation.org
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v1 1/2] mm: zswap: Modified zswap_store_page() to process multiple pages in a folio.
Date: Wed, 27 Nov 2024 14:53:23 -0800
Message-Id: <20241127225324.6770-2-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
References: <20241127225324.6770-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0
Modified zswap_store() to store the folio in batches of
SWAP_CRYPTO_BATCH_SIZE pages. Accordingly, refactored zswap_store_page()
into zswap_store_pages(), which processes a range of pages in the folio.
zswap_store_pages() is a vectorized version of zswap_store_page(). For
now, zswap_store_pages() sequentially compresses these pages with
zswap_compress().

These changes are a follow-up to code review comments received for [1],
and are intended to set up zswap_store() for batching with Intel IAA.
[1]: https://patchwork.kernel.org/project/linux-mm/patch/20241123070127.332773-11-kanchana.p.sridhar@intel.com/

Signed-off-by: Kanchana P Sridhar
---
 include/linux/zswap.h |   1 +
 mm/zswap.c            | 154 ++++++++++++++++++++++++------------------
 2 files changed, 88 insertions(+), 67 deletions(-)

diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index d961ead91bf1..05a81e750744 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -7,6 +7,7 @@

 struct lruvec;

+#define SWAP_CRYPTO_BATCH_SIZE 8UL
 extern atomic_long_t zswap_stored_pages;

 #ifdef CONFIG_ZSWAP
diff --git a/mm/zswap.c b/mm/zswap.c
index f6316b66fb23..b09d1023e775 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1409,78 +1409,96 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/

-static ssize_t zswap_store_page(struct page *page,
-				struct obj_cgroup *objcg,
-				struct zswap_pool *pool)
+/*
+ * Store multiple pages in @folio, starting from the page at index @si up to
+ * and including the page at index @ei.
+ */
+static ssize_t zswap_store_pages(struct folio *folio,
+				 long si,
+				 long ei,
+				 struct obj_cgroup *objcg,
+				 struct zswap_pool *pool)
 {
-	swp_entry_t page_swpentry = page_swap_entry(page);
+	struct page *page;
+	swp_entry_t page_swpentry;
 	struct zswap_entry *entry, *old;
+	size_t compressed_bytes = 0;
+	u8 nr_pages = ei - si + 1;
+	u8 i;
+
+	for (i = 0; i < nr_pages; ++i) {
+		page = folio_page(folio, si + i);
+		page_swpentry = page_swap_entry(page);
+
+		/* allocate entry */
+		entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
+		if (!entry) {
+			zswap_reject_kmemcache_fail++;
+			return -EINVAL;
+		}

-	/* allocate entry */
-	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
-	if (!entry) {
-		zswap_reject_kmemcache_fail++;
-		return -EINVAL;
-	}
-
-	if (!zswap_compress(page, entry, pool))
-		goto compress_failed;
+		if (!zswap_compress(page, entry, pool))
+			goto compress_failed;

-	old = xa_store(swap_zswap_tree(page_swpentry),
-		       swp_offset(page_swpentry),
-		       entry, GFP_KERNEL);
-	if (xa_is_err(old)) {
-		int err = xa_err(old);
+		old = xa_store(swap_zswap_tree(page_swpentry),
+			       swp_offset(page_swpentry),
+			       entry, GFP_KERNEL);
+		if (xa_is_err(old)) {
+			int err = xa_err(old);

-		WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
-		zswap_reject_alloc_fail++;
-		goto store_failed;
-	}
+			WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
+			zswap_reject_alloc_fail++;
+			goto store_failed;
+		}

-	/*
-	 * We may have had an existing entry that became stale when
-	 * the folio was redirtied and now the new version is being
-	 * swapped out. Get rid of the old.
-	 */
-	if (old)
-		zswap_entry_free(old);
+		/*
+		 * We may have had an existing entry that became stale when
+		 * the folio was redirtied and now the new version is being
+		 * swapped out. Get rid of the old.
+		 */
+		if (old)
+			zswap_entry_free(old);

-	/*
-	 * The entry is successfully compressed and stored in the tree, there is
-	 * no further possibility of failure. Grab refs to the pool and objcg.
-	 * These refs will be dropped by zswap_entry_free() when the entry is
-	 * removed from the tree.
-	 */
-	zswap_pool_get(pool);
-	if (objcg)
-		obj_cgroup_get(objcg);
+		/*
+		 * The entry is successfully compressed and stored in the tree, there is
+		 * no further possibility of failure. Grab refs to the pool and objcg.
+		 * These refs will be dropped by zswap_entry_free() when the entry is
+		 * removed from the tree.
+		 */
+		zswap_pool_get(pool);
+		if (objcg)
+			obj_cgroup_get(objcg);

-	/*
-	 * We finish initializing the entry while it's already in xarray.
-	 * This is safe because:
-	 *
-	 * 1. Concurrent stores and invalidations are excluded by folio lock.
-	 *
-	 * 2. Writeback is excluded by the entry not being on the LRU yet.
-	 *    The publishing order matters to prevent writeback from seeing
-	 *    an incoherent entry.
-	 */
-	entry->pool = pool;
-	entry->swpentry = page_swpentry;
-	entry->objcg = objcg;
-	entry->referenced = true;
-	if (entry->length) {
-		INIT_LIST_HEAD(&entry->lru);
-		zswap_lru_add(&zswap_list_lru, entry);
-	}
+		/*
+		 * We finish initializing the entry while it's already in xarray.
+		 * This is safe because:
+		 *
+		 * 1. Concurrent stores and invalidations are excluded by folio lock.
+		 *
+		 * 2. Writeback is excluded by the entry not being on the LRU yet.
+		 *    The publishing order matters to prevent writeback from seeing
+		 *    an incoherent entry.
+		 */
+		entry->pool = pool;
+		entry->swpentry = page_swpentry;
+		entry->objcg = objcg;
+		entry->referenced = true;
+		if (entry->length) {
+			INIT_LIST_HEAD(&entry->lru);
+			zswap_lru_add(&zswap_list_lru, entry);
+		}

-	return entry->length;
+		compressed_bytes += entry->length;
+		continue;

 store_failed:
-	zpool_free(pool->zpool, entry->handle);
+		zpool_free(pool->zpool, entry->handle);
 compress_failed:
-	zswap_entry_cache_free(entry);
-	return -EINVAL;
+		zswap_entry_cache_free(entry);
+		return -EINVAL;
+	}
+
+	return compressed_bytes;
 }

 bool zswap_store(struct folio *folio)
@@ -1492,7 +1510,7 @@ bool zswap_store(struct folio *folio)
 	struct zswap_pool *pool;
 	size_t compressed_bytes = 0;
 	bool ret = false;
-	long index;
+	long si, ei, incr = SWAP_CRYPTO_BATCH_SIZE;

 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1526,11 +1544,13 @@ bool zswap_store(struct folio *folio)
 		mem_cgroup_put(memcg);
 	}

-	for (index = 0; index < nr_pages; ++index) {
-		struct page *page = folio_page(folio, index);
+	/* Store the folio in batches of SWAP_CRYPTO_BATCH_SIZE pages. */
+	for (si = 0, ei = min(si + incr - 1, nr_pages - 1);
+	     ((si < nr_pages) && (ei < nr_pages));
+	     si = ei + 1, ei = min(si + incr - 1, nr_pages - 1)) {
 		ssize_t bytes;

-		bytes = zswap_store_page(page, objcg, pool);
+		bytes = zswap_store_pages(folio, si, ei, objcg, pool);
 		if (bytes < 0)
 			goto put_pool;
 		compressed_bytes += bytes;
@@ -1565,9 +1585,9 @@ bool zswap_store(struct folio *folio)
 		struct zswap_entry *entry;
 		struct xarray *tree;

-		for (index = 0; index < nr_pages; ++index) {
-			tree = swap_zswap_tree(swp_entry(type, offset + index));
-			entry = xa_erase(tree, offset + index);
+		for (si = 0; si < nr_pages; ++si) {
+			tree = swap_zswap_tree(swp_entry(type, offset + si));
+			entry = xa_erase(tree, offset + si);
 			if (entry)
 				zswap_entry_free(entry);
 		}