From patchwork Sat Sep 28 02:16:18 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
X-Patchwork-Id: 13814601
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
	yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
	usamaarif642@gmail.com, shakeel.butt@linux.dev, ryan.roberts@arm.com,
	ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
	kanchana.p.sridhar@intel.com
Subject: [PATCH v8 6/8] mm: zswap: Support large folios in zswap_store().
Date: Fri, 27 Sep 2024 19:16:18 -0700
Message-Id: <20240928021620.8369-7-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240928021620.8369-1-kanchana.p.sridhar@intel.com>
References: <20240928021620.8369-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

zswap_store() will store large folios by compressing them page by page.

This patch provides a sequential implementation of storing a large folio
in zswap_store() by iterating through each page in the folio to compress
and store it in the zswap zpool.

Towards this goal, zswap_compress() is modified to take a page instead of
a folio as input.
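To make the page-by-page flow concrete, this is the core loop the diff
below adds to zswap_store() (a condensed excerpt; the limit checks,
reference counting and unwinding of already-stored entries around it are
shown in the full patch):

	for (index = 0; index < nr_pages; ++index) {
		/* Compress and store one page as its own zswap entry. */
		if (!zswap_store_page(folio, index, objcg, pool,
				      &compressed_bytes))
			goto put_pool;	/* one failure unwinds the whole folio */
	}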
Each page's swap offset is stored as a separate zswap entry.

We check the cgroup zswap limit and the zpool utilization against the
zswap max/accept_threshold limits once, at the beginning of zswap_store().
We also obtain a percpu_ref_tryget() reference to the current zswap_pool
at the start of zswap_store(), to prevent it from being deleted while the
folio is being stored.

If these one-time checks pass, we compress the sub-pages of the folio,
while maintaining a running count of compressed bytes for all the folio's
sub-pages. If all pages are successfully compressed and stored, we do the
cgroup zswap charging with the total compressed bytes, and batch update
the zswap_stored_pages counter and the ZSWPOUT event stats with
folio_nr_pages() once, before returning from zswap_store().

The patch adds a new zswap_pool_get() function that is called from the
sub-page level zswap_store_page() function.

If an error is encountered while storing any page of the folio, all pages
of that folio currently stored in zswap are invalidated. Thus, a folio is
either entirely stored in zswap, or entirely not stored in zswap.

This patch forms the basis for building compress batching of pages in a
large folio in zswap_store(), by compressing up to, say, 8 pages of the
folio in parallel in hardware using the Intel In-Memory Analytics
Accelerator (Intel IAA).

This change reuses and adapts the functionality in Ryan Roberts' RFC
patch [1]:

  "[RFC,v1] mm: zswap: Store large folios without splitting"

  [1] https://lore.kernel.org/linux-mm/20231019110543.3284654-1-ryan.roberts@arm.com/T/#u

Also, some of the RFC comments from the discussion in [1] have been
addressed.

Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 mm/zswap.c | 227 ++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 165 insertions(+), 62 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 43e4e216db41..b8395ddf7b7c 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -411,6 +411,16 @@ static int __must_check zswap_pool_tryget(struct zswap_pool *pool)
 	return percpu_ref_tryget(&pool->ref);
 }
 
+/*
+ * Note: zswap_pool_get() should only be called after zswap_pool_tryget()
+ * returns success. zswap_pool_tryget() returns success only if the "pool" is
+ * non-NULL and the "&pool->ref" is non-0.
+ */
+static void zswap_pool_get(struct zswap_pool *pool)
+{
+	percpu_ref_get(&pool->ref);
+}
+
 static void zswap_pool_put(struct zswap_pool *pool)
 {
 	percpu_ref_put(&pool->ref);
@@ -1402,38 +1412,35 @@ static void shrink_worker(struct work_struct *w)
 /*********************************
 * main API
 **********************************/
-bool zswap_store(struct folio *folio)
+
+/*
+ * Stores the page at specified "index" in a folio.
+ *
+ * @folio: The folio to store in zswap.
+ * @index: Index of the page in the folio that this function will store.
+ * @objcg: The folio's objcg.
+ * @pool:  The zswap_pool to store the compressed data for the page.
+ *         The caller should have obtained a reference to a valid
+ *         zswap_pool by calling zswap_pool_tryget(), to pass as this
+ *         argument.
+ * @compressed_bytes: The compressed entry->length value is added
+ *                    to this, so that the caller can get the total
+ *                    compressed lengths of all sub-pages in a folio.
+ */
+static bool zswap_store_page(struct folio *folio, long index,
+			     struct obj_cgroup *objcg,
+			     struct zswap_pool *pool,
+			     size_t *compressed_bytes)
 {
+	struct page *page = folio_page(folio, index);
 	swp_entry_t swp = folio->swap;
-	pgoff_t offset = swp_offset(swp);
 	struct xarray *tree = swap_zswap_tree(swp);
+	pgoff_t offset = swp_offset(swp) + index;
 	struct zswap_entry *entry, *old;
-	struct obj_cgroup *objcg = NULL;
-	struct mem_cgroup *memcg = NULL;
-
-	VM_WARN_ON_ONCE(!folio_test_locked(folio));
-	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
+	int type = swp_type(swp);
 
-	/* Large folios aren't supported */
-	if (folio_test_large(folio))
-		return false;
-
-	if (!zswap_enabled)
-		goto check_old;
-
-	/* Check cgroup limits */
-	objcg = get_obj_cgroup_from_folio(folio);
-	if (objcg && !obj_cgroup_may_zswap(objcg)) {
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		if (shrink_memcg(memcg)) {
-			mem_cgroup_put(memcg);
-			goto reject;
-		}
-		mem_cgroup_put(memcg);
-	}
-
-	if (zswap_check_limits())
-		goto reject;
+	if (objcg)
+		obj_cgroup_get(objcg);
 
 	/* allocate entry */
 	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
@@ -1442,24 +1449,21 @@ bool zswap_store(struct folio *folio)
 		goto reject;
 	}
 
-	/* if entry is successfully added, it keeps the reference */
-	entry->pool = zswap_pool_current_get();
-	if (!entry->pool)
-		goto freepage;
+	/*
+	 * We get here only after the call to zswap_pool_tryget() in the
+	 * caller, zswap_store(), has returned success. Hence it is safe
+	 * to call zswap_pool_get().
+	 *
+	 * if entry is successfully added, it keeps the reference
+	 */
+	zswap_pool_get(pool);
 
-	if (objcg) {
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
-			mem_cgroup_put(memcg);
-			goto put_pool;
-		}
-		mem_cgroup_put(memcg);
-	}
+	entry->pool = pool;
 
-	if (!zswap_compress(&folio->page, entry))
+	if (!zswap_compress(page, entry))
 		goto put_pool;
 
-	entry->swpentry = swp;
+	entry->swpentry = swp_entry(type, offset);
 	entry->objcg = objcg;
 	entry->referenced = true;
 
@@ -1480,11 +1484,6 @@ bool zswap_store(struct folio *folio)
 	if (old)
 		zswap_entry_free(old);
 
-	if (objcg) {
-		obj_cgroup_charge_zswap(objcg, entry->length);
-		count_objcg_event(objcg, ZSWPOUT);
-	}
-
 	/*
 	 * We finish initializing the entry while it's already in xarray.
 	 * This is safe because:
@@ -1496,36 +1495,140 @@ bool zswap_store(struct folio *folio)
 	 * an incoherent entry.
 	 */
 	if (entry->length) {
+		*compressed_bytes += entry->length;
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&zswap_list_lru, entry);
 	}
 
-	/* update stats */
-	atomic_long_inc(&zswap_stored_pages);
-	count_vm_event(ZSWPOUT);
-
 	return true;
 
 store_failed:
 	zpool_free(entry->pool->zpool, entry->handle);
 put_pool:
-	zswap_pool_put(entry->pool);
-freepage:
+	zswap_pool_put(pool);
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
-	if (zswap_pool_reached_full)
-		queue_work(shrink_wq, &zswap_shrink_work);
-check_old:
+	return false;
+}
+
+bool zswap_store(struct folio *folio)
+{
+	long nr_pages = folio_nr_pages(folio);
+	swp_entry_t swp = folio->swap;
+	struct xarray *tree = swap_zswap_tree(swp);
+	pgoff_t offset = swp_offset(swp);
+	struct obj_cgroup *objcg = NULL;
+	struct mem_cgroup *memcg = NULL;
+	struct zswap_pool *pool;
+	size_t compressed_bytes = 0;
+	bool ret = false;
+	long index;
+
+	VM_WARN_ON_ONCE(!folio_test_locked(folio));
+	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
+
+	if (!zswap_enabled)
+		goto reject;
+
 	/*
-	 * If the zswap store fails or zswap is disabled, we must invalidate the
-	 * possibly stale entry which was previously stored at this offset.
-	 * Otherwise, writeback could overwrite the new data in the swapfile.
+	 * Check cgroup zswap limits:
+	 *
+	 * The cgroup zswap limit check is done once at the beginning of
+	 * zswap_store(). The cgroup charging is done once, at the end
+	 * of a successful folio store. What this means is, if the cgroup
+	 * was within the zswap_max limit at the beginning of a large folio
+	 * store, it could go over the limit by at most (HPAGE_PMD_NR - 1)
+	 * pages.
 	 */
-	entry = xa_erase(tree, offset);
-	if (entry)
-		zswap_entry_free(entry);
-	return false;
+	objcg = get_obj_cgroup_from_folio(folio);
+	if (objcg && !obj_cgroup_may_zswap(objcg)) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		if (shrink_memcg(memcg)) {
+			mem_cgroup_put(memcg);
+			goto put_objcg;
+		}
+		mem_cgroup_put(memcg);
+	}
+
+	/*
+	 * Check zpool utilization against zswap limits:
+	 *
+	 * The zswap zpool utilization is also checked against the limits
+	 * just once, at the start of zswap_store(). If the check passes,
+	 * any breaches of the limits set by zswap_max_pages() or
+	 * zswap_accept_thr_pages() that may happen while storing this
+	 * folio, will only be detected during the next call to
+	 * zswap_store() by any process.
+	 */
+	if (zswap_check_limits())
+		goto put_objcg;
+
+	pool = zswap_pool_current_get();
+	if (!pool)
+		goto put_objcg;
+
+	if (objcg) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
+			mem_cgroup_put(memcg);
+			goto put_pool;
+		}
+		mem_cgroup_put(memcg);
+	}
+
+	/*
+	 * Store each page of the folio as a separate entry. If we fail to
+	 * store a page, unwind by deleting all the pages for this folio
+	 * currently in zswap.
+	 */
+	for (index = 0; index < nr_pages; ++index) {
+		if (!zswap_store_page(folio, index, objcg, pool, &compressed_bytes))
+			goto put_pool;
+	}
+
+	/*
+	 * All pages in the folio have been successfully stored.
+	 * Batch update the cgroup zswap charging, zswap_stored_pages atomic,
+	 * and ZSWPOUT event stats.
+	 */
+	if (objcg) {
+		obj_cgroup_charge_zswap(objcg, compressed_bytes);
+		count_objcg_events(objcg, ZSWPOUT, nr_pages);
+	}
+
+	/* update stats */
+	atomic_long_add(nr_pages, &zswap_stored_pages);
+	count_vm_events(ZSWPOUT, nr_pages);
+
+	ret = true;
+
+put_pool:
+	zswap_pool_put(pool);
+put_objcg:
+	obj_cgroup_put(objcg);
+reject:
+	/*
+	 * If the zswap store fails or zswap is disabled, we must invalidate
+	 * the possibly stale entries which were previously stored at the
+	 * offsets corresponding to each page of the folio. Otherwise,
+	 * writeback could overwrite the new data in the swapfile.
+	 */
+	if (!ret) {
+		struct zswap_entry *entry;
+		long i;
+
+		for (i = 0; i < nr_pages; ++i) {
+			entry = xa_erase(tree, offset + i);
+			if (entry)
+				zswap_entry_free(entry);
+		}
+
+		if (zswap_pool_reached_full)
+			queue_work(shrink_wq, &zswap_shrink_work);
+	}
+
+	return ret;
 }
 
 bool zswap_load(struct folio *folio)