From patchwork Tue Sep 24 01:17:06 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13810018
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, shakeel.butt@linux.dev, ryan.roberts@arm.com,
    ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v7 5/8] mm: zswap: Compress and store a specific page in a folio.
Date: Mon, 23 Sep 2024 18:17:06 -0700
Message-Id: <20240924011709.7037-6-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240924011709.7037-1-kanchana.p.sridhar@intel.com>
References: <20240924011709.7037-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0

For zswap_store() to handle mTHP folios, we need to iterate through each
page in the mTHP, compress it, and store it in the zswap pool. This patch
introduces an auxiliary function zswap_store_page() that provides this
functionality. The function signature reflects the design intent: it is
meant to be invoked by zswap_store() once per page of an mTHP.
Hence, the folio's objcg and the zswap_pool to use are input parameters,
for the sake of efficiency and consistency.

The functionality in zswap_store_page() is reused and adapted from Ryan
Roberts' RFC patch [1]:

  "[RFC,v1] mm: zswap: Store large folios without splitting"

  [1] https://lore.kernel.org/linux-mm/20231019110543.3284654-1-ryan.roberts@arm.com/T/#u

Co-developed-by: Ryan Roberts
Signed-off-by:
Signed-off-by: Kanchana P Sridhar
---
 mm/zswap.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

diff --git a/mm/zswap.c b/mm/zswap.c
index 9bea948d653e..8f2e0ab34c84 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1463,6 +1463,94 @@ static void zswap_delete_stored_offsets(struct xarray *tree,
 	}
 }
 
+/*
+ * Stores the page at specified "index" in a folio.
+ *
+ * @folio: The folio to store in zswap.
+ * @index: Index into the page in the folio that this function will store.
+ * @objcg: The folio's objcg.
+ * @pool: The zswap_pool to store the compressed data for the page.
+ */
+static bool __maybe_unused zswap_store_page(struct folio *folio, long index,
+					    struct obj_cgroup *objcg,
+					    struct zswap_pool *pool)
+{
+	swp_entry_t swp = folio->swap;
+	int type = swp_type(swp);
+	pgoff_t offset = swp_offset(swp) + index;
+	struct page *page = folio_page(folio, index);
+	struct xarray *tree = swap_zswap_tree(swp);
+	struct zswap_entry *entry;
+
+	if (objcg)
+		obj_cgroup_get(objcg);
+
+	if (zswap_check_limits())
+		goto reject;
+
+	/* allocate entry */
+	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
+	if (!entry) {
+		zswap_reject_kmemcache_fail++;
+		goto reject;
+	}
+
+	/* if entry is successfully added, it keeps the reference */
+	if (!zswap_pool_get(pool))
+		goto freepage;
+
+	entry->pool = pool;
+
+	if (!zswap_compress(page, entry))
+		goto put_pool;
+
+	entry->swpentry = swp_entry(type, offset);
+	entry->objcg = objcg;
+	entry->referenced = true;
+
+	if (!zswap_store_entry(tree, entry))
+		goto store_failed;
+
+	if (objcg) {
+		obj_cgroup_charge_zswap(objcg, entry->length);
+		count_objcg_event(objcg, ZSWPOUT);
+	}
+
+	/*
+	 * We finish initializing the entry while it's already in xarray.
+	 * This is safe because:
+	 *
+	 * 1. Concurrent stores and invalidations are excluded by folio lock.
+	 *
+	 * 2. Writeback is excluded by the entry not being on the LRU yet.
+	 *    The publishing order matters to prevent writeback from seeing
+	 *    an incoherent entry.
+	 */
+	if (entry->length) {
+		INIT_LIST_HEAD(&entry->lru);
+		zswap_lru_add(&zswap_list_lru, entry);
+	}
+
+	/* update stats */
+	atomic_inc(&zswap_stored_pages);
+	count_vm_event(ZSWPOUT);
+
+	return true;
+
+store_failed:
+	zpool_free(entry->pool->zpool, entry->handle);
+put_pool:
+	zswap_pool_put(pool);
+freepage:
+	zswap_entry_cache_free(entry);
+reject:
+	obj_cgroup_put(objcg);
+	if (zswap_pool_reached_full)
+		queue_work(shrink_wq, &zswap_shrink_work);
+
+	return false;
+}
+
 bool zswap_store(struct folio *folio)
 {
 	long nr_pages = folio_nr_pages(folio);
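
For reviewers who want the surrounding context: below is a minimal,
illustrative sketch (not part of this patch) of how zswap_store() might
drive zswap_store_page() once per page of an mTHP, passing the folio's
objcg and the chosen pool down to each call. The real caller is reworked
in a later patch of this series; the wrapper name and loop shown here are
hypothetical and only illustrate the intended per-page calling convention.

/*
 * Illustrative sketch only -- not part of this patch. The function name
 * below is a placeholder; the actual zswap_store() rework that calls
 * zswap_store_page() lands later in this series.
 */
static bool zswap_store_all_pages_sketch(struct folio *folio,
					 struct obj_cgroup *objcg,
					 struct zswap_pool *pool)
{
	long nr_pages = folio_nr_pages(folio);
	long index;

	for (index = 0; index < nr_pages; index++) {
		/*
		 * Compress and store one page at a time; objcg and pool are
		 * resolved once by the caller and reused for every page,
		 * which is why they are parameters of zswap_store_page().
		 */
		if (!zswap_store_page(folio, index, objcg, pool))
			return false;
	}

	return true;
}

On a false return, the caller would presumably delete any offsets already
stored for earlier pages of the folio (e.g. via the existing
zswap_delete_stored_offsets() helper), so that the folio is stored in
zswap either in its entirety or not at all.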