From patchwork Mon Sep 30 22:12:21 2024
X-Patchwork-Submitter: "Sridhar, Kanchana P"
X-Patchwork-Id: 13817149
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
 yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
 usamaarif642@gmail.com, shakeel.butt@linux.dev, ryan.roberts@arm.com,
 ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org,
 willy@infradead.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
 kanchana.p.sridhar@intel.com
Subject: [PATCH v9 7/7] mm: swap: Count successful large folio zswap stores
 in hugepage zswpout stats.
Date: Mon, 30 Sep 2024 15:12:21 -0700
Message-Id: <20240930221221.6981-8-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240930221221.6981-1-kanchana.p.sridhar@intel.com>
References: <20240930221221.6981-1-kanchana.p.sridhar@intel.com>

Added a new MTHP_STAT_ZSWPOUT entry to the sysfs transparent_hugepage
stats so that successful large folio zswap stores can be accounted under
the per-order sysfs "zswpout" stats:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/zswpout

Swap-outs to other, non-zswap swap devices will continue to be counted
under the existing sysfs "swpout" stats:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/swpout

Also, added documentation for
the newly added sysfs per-order hugepage "zswpout" stats. The
documentation clarifies that only non-zswap swapouts will be accounted
in the existing "swpout" stats.

Signed-off-by: Kanchana P Sridhar
Reviewed-by: Nhat Pham
---
 Documentation/admin-guide/mm/transhuge.rst | 8 ++++++--
 include/linux/huge_mm.h                    | 1 +
 mm/huge_memory.c                           | 3 +++
 mm/page_io.c                               | 1 +
 4 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index cfdd16a52e39..2a171ed5206e 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -530,10 +530,14 @@ anon_fault_fallback_charge
 	instead falls back to using huge pages with lower orders or
 	small pages even though the allocation was successful.
 
-swpout
-	is incremented every time a huge page is swapped out in one
+zswpout
+	is incremented every time a huge page is swapped out to zswap in one
 	piece without splitting.
 
+swpout
+	is incremented every time a huge page is swapped out to a non-zswap
+	swap device in one piece without splitting.
+
 swpout_fallback
 	is incremented if a huge page has to be split before swapout.
 	Usually because failed to allocate some continuous swap space
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5eb4b0376c7d..3eca60f3d512 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -119,6 +119,7 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+	MTHP_STAT_ZSWPOUT,
 	MTHP_STAT_SWPOUT,
 	MTHP_STAT_SWPOUT_FALLBACK,
 	MTHP_STAT_SHMEM_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 13bf59b84075..f596f57a3a90 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -611,6 +611,7 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
 #ifdef CONFIG_SHMEM
@@ -629,6 +630,7 @@ static struct attribute *anon_stats_attrs[] = {
 	&anon_fault_fallback_attr.attr,
 	&anon_fault_fallback_charge_attr.attr,
 #ifndef CONFIG_SHMEM
+	&zswpout_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
 #endif
@@ -659,6 +661,7 @@ static struct attribute_group file_stats_attr_grp = {
 
 static struct attribute *any_stats_attrs[] = {
 #ifdef CONFIG_SHMEM
+	&zswpout_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
 #endif
diff --git a/mm/page_io.c b/mm/page_io.c
index bc1183299a7d..4aa34862676f 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -269,6 +269,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		swap_zeromap_folio_clear(folio);
 	}
 	if (zswap_store(folio)) {
+		count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT);
 		folio_unlock(folio);
 		return 0;
 	}
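
Note (not part of the patch): below is a minimal userspace sketch of how
the new per-order counters could be read once this series is applied,
assuming a kernel built with CONFIG_TRANSPARENT_HUGEPAGE. Only the sysfs
paths come from the commit message; the program itself is illustrative.

/*
 * read_zswpout.c - illustrative only, not part of this patch.
 * Prints the per-order "zswpout" counter for every hugepage size
 * exported under /sys/kernel/mm/transparent_hugepage/.
 */
#include <glob.h>
#include <stdio.h>

int main(void)
{
	glob_t g;
	size_t i;

	/* One stats directory exists per supported hugepage order. */
	if (glob("/sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/zswpout",
		 0, NULL, &g) != 0)
		return 1;

	for (i = 0; i < g.gl_pathc; i++) {
		FILE *f = fopen(g.gl_pathv[i], "r");
		unsigned long long count;

		if (!f)
			continue;
		if (fscanf(f, "%llu", &count) == 1)
			printf("%s: %llu\n", g.gl_pathv[i], count);
		fclose(f);
	}
	globfree(&g);
	return 0;
}

The same loop works for the sibling "swpout" files, which after this
patch count only swap-outs that go to a non-zswap swap device.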