From patchwork Tue Oct  1 05:32:22 2024
X-Patchwork-Submitter: Kanchana P Sridhar
X-Patchwork-Id: 13817487
From: Kanchana P Sridhar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, shakeel.butt@linux.dev, ryan.roberts@arm.com,
    ying.huang@intel.com, 21cnbao@gmail.com, akpm@linux-foundation.org,
    willy@infradead.org
Cc: nanhai.zou@intel.com, wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
    kanchana.p.sridhar@intel.com
Subject: [PATCH v10 7/7] mm: swap: Count successful large folio zswap stores
 in hugepage zswpout stats.
Date: Mon, 30 Sep 2024 22:32:22 -0700
Message-Id: <20241001053222.6944-8-kanchana.p.sridhar@intel.com>
In-Reply-To: <20241001053222.6944-1-kanchana.p.sridhar@intel.com>
References: <20241001053222.6944-1-kanchana.p.sridhar@intel.com>
Added a new MTHP_STAT_ZSWPOUT entry to the sysfs transparent_hugepage
stats so that successful large folio zswap stores can be accounted under
the per-order sysfs "zswpout" stats:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/zswpout

Swap-out events to other, non-zswap swap devices will continue to be
counted under the existing sysfs "swpout" stats:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/swpout

Also, added documentation for the newly added sysfs per-order hugepage
"zswpout" stats. The documentation clarifies that only non-zswap
swapouts will be accounted in the existing "swpout" stats.
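As an illustration (not part of the patch), the new per-order counters
can be read and summed with a small shell loop. The sysfs paths only
exist on kernels built with CONFIG_TRANSPARENT_HUGEPAGE; on other
systems the glob does not expand and the total stays 0:

```shell
# Sum the per-order "zswpout" counters added by this patch across all
# supported mTHP sizes. Each file holds a single event count.
total=0
for f in /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/zswpout; do
    # Skip the unexpanded glob / unreadable entries on kernels without THP.
    [ -r "$f" ] && total=$((total + $(cat "$f")))
done
echo "total mTHP zswpout events: $total"
```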
Signed-off-by: Kanchana P Sridhar
Reviewed-by: Nhat Pham
---
 Documentation/admin-guide/mm/transhuge.rst | 8 ++++++--
 include/linux/huge_mm.h                    | 1 +
 mm/huge_memory.c                           | 3 +++
 mm/page_io.c                               | 1 +
 4 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index cfdd16a52e39..2a171ed5206e 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -530,10 +530,14 @@ anon_fault_fallback_charge
 	instead falls back to using huge pages with lower orders or
 	small pages even though the allocation was successful.
 
-swpout
-	is incremented every time a huge page is swapped out in one
+zswpout
+	is incremented every time a huge page is swapped out to zswap in one
 	piece without splitting.
 
+swpout
+	is incremented every time a huge page is swapped out to a non-zswap
+	swap device in one piece without splitting.
+
 swpout_fallback
 	is incremented if a huge page has to be split before swapout.
 	Usually because failed to allocate some continuous swap space

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5eb4b0376c7d..3eca60f3d512 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -119,6 +119,7 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+	MTHP_STAT_ZSWPOUT,
 	MTHP_STAT_SWPOUT,
 	MTHP_STAT_SWPOUT_FALLBACK,
 	MTHP_STAT_SHMEM_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 243c15912105..a7b05f4c2a5e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -611,6 +611,7 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
 #ifdef CONFIG_SHMEM
@@ -629,6 +630,7 @@ static struct attribute *anon_stats_attrs[] = {
 	&anon_fault_fallback_attr.attr,
 	&anon_fault_fallback_charge_attr.attr,
 #ifndef CONFIG_SHMEM
+	&zswpout_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
 #endif
@@ -659,6 +661,7 @@ static struct attribute_group file_stats_attr_grp = {
 static struct attribute *any_stats_attrs[] = {
 #ifdef CONFIG_SHMEM
+	&zswpout_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
 #endif
diff --git a/mm/page_io.c b/mm/page_io.c
index bc1183299a7d..4aa34862676f 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -269,6 +269,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		swap_zeromap_folio_clear(folio);
 	}
 	if (zswap_store(folio)) {
+		count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT);
 		folio_unlock(folio);
 		return 0;
 	}