From patchwork Sat May 4 07:30:05 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13653804
Date: Sat, 4 May 2024 00:30:05 -0700
Message-ID: <20240504073011.4000534-2-yuanchu@google.com>
In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com>
Subject: [PATCH v1 1/7] mm: multi-gen LRU: ignore non-leaf pmd_young for force_scan=true
From: Yuanchu Xie

When non-leaf pmd accessed bits are available, MGLRU page table walks can
clear the non-leaf pmd accessed bit and ignore the accessed bit on the pte
if it's on a different node, skipping a generation update as well. If
another scan occurs on the same node as said skipped pte, the non-leaf pmd
accessed bit might remain cleared and the pte accessed bits won't be
checked.

While this is sufficient for reclaim-driven aging, where the goal is to
select a reasonably cold page, the access can be missed when aging
proactively for workingset estimation of a node/memcg.

In more detail, get_pfn_folio returns NULL if the folio's nid != node
under scanning, so the page table walk skips processing of said pte. Now
the pmd_young flag on this pmd is cleared, and if none of the ptes are
accessed before another scan occurs on the folio's node, the pmd_young
check fails and the pte accessed bit is skipped.

Since force_scan disables various other optimizations, we check force_scan
to ignore the non-leaf pmd accessed bit.
Signed-off-by: Yuanchu Xie
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..1a7c7d537db6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3522,7 +3522,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 		walk->mm_stats[MM_NONLEAF_TOTAL]++;
 
-		if (should_clear_pmd_young()) {
+		if (!walk->force_scan && should_clear_pmd_young()) {
 			if (!pmd_young(val))
 				continue;

From patchwork Sat May 4 07:30:06 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13653805
Date: Sat, 4 May 2024 00:30:06 -0700
Message-ID: <20240504073011.4000534-3-yuanchu@google.com>
In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com>
Subject: [PATCH v1 2/7] mm: aggregate working set information into histograms
From: Yuanchu Xie

Hierarchically aggregate all memcgs' MGLRU generations and their page
counts into working set page age histograms. The histograms break down
the system's working set per node and per anon/file type.

The sysfs interfaces are as follows:

/sys/devices/system/node/nodeX/page_age
	A per-node page age histogram, showing an aggregate of the
	node's lruvecs. The information is extracted from MGLRU's
	per-generation page counters. Reading this file causes a
	hierarchical aging of all lruvecs, scanning pages and creating
	a new generation in each lruvec.
	For example:
	1000 anon=0 file=0
	2000 anon=0 file=0
	100000 anon=5533696 file=5566464
	18446744073709551615 anon=0 file=0

/sys/devices/system/node/nodeX/page_age_interval
	A comma-separated list of times, in milliseconds, that
	configures the intervals the page age histogram uses for
	aggregation.
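As a rough usage illustration (not part of the patch), a userspace reader
could configure the bins and then fetch the histogram as in the sketch
below. It assumes the node0 paths as named above and omits error handling.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *intervals = "1000,2000,100000";
	char buf[4096];
	ssize_t n;
	int fd;

	/* Configure bin edges at 1s, 2s and 100s (plus the catch-all bin). */
	fd = open("/sys/devices/system/node/node0/page_age_interval", O_WRONLY);
	if (fd < 0)
		return 1;
	write(fd, intervals, strlen(intervals));
	close(fd);

	/* Reading page_age ages the lruvecs and returns the histogram. */
	fd = open("/sys/devices/system/node/node0/page_age", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	close(fd);
	return 0;
}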
Signed-off-by: Yuanchu Xie --- drivers/base/node.c | 6 + include/linux/mmzone.h | 9 + include/linux/workingset_report.h | 79 ++++++ mm/Kconfig | 9 + mm/Makefile | 1 + mm/internal.h | 9 + mm/memcontrol.c | 2 + mm/mm_init.c | 2 + mm/mmzone.c | 2 + mm/vmscan.c | 32 +++ mm/workingset_report.c | 438 ++++++++++++++++++++++++++++++ 11 files changed, 589 insertions(+) create mode 100644 include/linux/workingset_report.h create mode 100644 mm/workingset_report.c diff --git a/drivers/base/node.c b/drivers/base/node.c index 1c05640461dd..81bf0c68efca 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -20,6 +20,8 @@ #include #include #include +#include +#include static const struct bus_type node_subsys = { .name = "node", @@ -625,6 +627,7 @@ static int register_node(struct node *node, int num) } else { hugetlb_register_node(node); compaction_register_node(node); + wsr_init_sysfs(node); } return error; @@ -641,6 +644,9 @@ void unregister_node(struct node *node) { hugetlb_unregister_node(node); compaction_unregister_node(node); + wsr_remove_sysfs(node); + wsr_destroy_lruvec(mem_cgroup_lruvec(NULL, NODE_DATA(node->dev.id))); + wsr_destroy_pgdat(NODE_DATA(node->dev.id)); node_remove_accesses(node); node_remove_caches(node); device_unregister(&node->dev); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index a497f189d988..3e94d76c8f29 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -24,6 +24,7 @@ #include #include #include +#include /* Free memory management - zoned buddy allocator. */ #ifndef CONFIG_ARCH_FORCE_MAX_ORDER @@ -625,6 +626,9 @@ struct lruvec { struct lru_gen_mm_state mm_state; #endif #endif /* CONFIG_LRU_GEN */ +#ifdef CONFIG_WORKINGSET_REPORT + struct wsr_state wsr; +#endif /* CONFIG_WORKINGSET_REPORT */ #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif @@ -1398,6 +1402,11 @@ typedef struct pglist_data { struct lru_gen_memcg memcg_lru; #endif +#ifdef CONFIG_WORKINGSET_REPORT + struct mutex wsr_update_mutex; + struct wsr_report_bins __rcu *wsr_page_age_bins; +#endif + CACHELINE_PADDING(_pad2_); /* Per-node vmstats */ diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h new file mode 100644 index 000000000000..d7c2ee14ec87 --- /dev/null +++ b/include/linux/workingset_report.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_WORKINGSET_REPORT_H +#define _LINUX_WORKINGSET_REPORT_H + +#include +#include + +struct mem_cgroup; +struct pglist_data; +struct node; +struct lruvec; + +#ifdef CONFIG_WORKINGSET_REPORT + +#define WORKINGSET_REPORT_MIN_NR_BINS 2 +#define WORKINGSET_REPORT_MAX_NR_BINS 32 + +#define WORKINGSET_INTERVAL_MAX ((unsigned long)-1) +#define ANON_AND_FILE 2 + +struct wsr_report_bin { + unsigned long idle_age; + unsigned long nr_pages[ANON_AND_FILE]; +}; + +struct wsr_report_bins { + /* excludes the WORKINGSET_INTERVAL_MAX bin */ + unsigned long nr_bins; + /* last bin contains WORKINGSET_INTERVAL_MAX */ + unsigned long idle_age[WORKINGSET_REPORT_MAX_NR_BINS]; + struct rcu_head rcu; +}; + +struct wsr_page_age_histo { + unsigned long timestamp; + struct wsr_report_bin bins[WORKINGSET_REPORT_MAX_NR_BINS]; +}; + +struct wsr_state { + /* breakdown of workingset by page age */ + struct mutex page_age_lock; + struct wsr_page_age_histo *page_age; +}; + +void wsr_init_lruvec(struct lruvec *lruvec); +void wsr_destroy_lruvec(struct lruvec *lruvec); +void wsr_init_pgdat(struct pglist_data *pgdat); +void wsr_destroy_pgdat(struct pglist_data *pgdat); +void wsr_init_sysfs(struct node *node); +void 
wsr_remove_sysfs(struct node *node); + +/* + * Returns true if the wsr is configured to be refreshed. + * The next refresh time is stored in refresh_time. + */ +bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, + struct pglist_data *pgdat); +#else +static inline void wsr_init_lruvec(struct lruvec *lruvec) +{ +} +static inline void wsr_destroy_lruvec(struct lruvec *lruvec) +{ +} +static inline void wsr_init_pgdat(struct pglist_data *pgdat) +{ +} +static inline void wsr_destroy_pgdat(struct pglist_data *pgdat) +{ +} +static inline void wsr_init_sysfs(struct node *node) +{ +} +static inline void wsr_remove_sysfs(struct node *node) +{ +} +#endif /* CONFIG_WORKINGSET_REPORT */ + +#endif /* _LINUX_WORKINGSET_REPORT_H */ diff --git a/mm/Kconfig b/mm/Kconfig index ffc3a2ba3a8c..212f203b10b9 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1261,6 +1261,15 @@ config LOCK_MM_AND_FIND_VMA config IOMMU_MM_DATA bool +config WORKINGSET_REPORT + bool "Working set reporting" + depends on LRU_GEN && SYSFS + help + Report system and per-memcg working set to userspace. + + This option exports stats and events giving the user more insight + into its memory working set. + source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index e4b5b75aaec9..57093657030d 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -92,6 +92,7 @@ obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o obj-$(CONFIG_PAGE_COUNTER) += page_counter.o obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o +obj-$(CONFIG_WORKINGSET_REPORT) += workingset_report.o ifdef CONFIG_SWAP obj-$(CONFIG_MEMCG) += swap_cgroup.o endif diff --git a/mm/internal.h b/mm/internal.h index f309a010d50f..5e0caba64ee4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -198,12 +198,21 @@ extern unsigned long highest_memmap_pfn; /* * in mm/vmscan.c: */ +struct scan_control; bool isolate_lru_page(struct page *page); bool folio_isolate_lru(struct folio *folio); void putback_lru_page(struct page *page); void folio_putback_lru(struct folio *folio); extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason); +#ifdef CONFIG_WORKINGSET_REPORT +/* + * in mm/wsr.c + */ +/* Requires wsr->page_age_lock held */ +void wsr_refresh_scan(struct lruvec *lruvec); +#endif + /* * in mm/rmap.c: */ diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 1ed40f9d3a27..b5b67c93c287 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -65,6 +65,7 @@ #include #include #include +#include #include "internal.h" #include #include @@ -5457,6 +5458,7 @@ static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node) if (!pn) return; + wsr_destroy_lruvec(&pn->lruvec); free_percpu(pn->lruvec_stats_percpu); kfree(pn); } diff --git a/mm/mm_init.c b/mm/mm_init.c index 2c19f5515e36..c741c3f1e3db 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -27,6 +27,7 @@ #include #include #include +#include #include "internal.h" #include "slab.h" #include "shuffle.h" @@ -1368,6 +1369,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat) pgdat_page_ext_init(pgdat); lruvec_init(&pgdat->__lruvec); + wsr_init_pgdat(pgdat); } static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid, diff --git a/mm/mmzone.c b/mm/mmzone.c index c01896eca736..477cd5ac1d78 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -90,6 +90,8 @@ void lruvec_init(struct lruvec *lruvec) */ list_del(&lruvec->lists[LRU_UNEVICTABLE]); + wsr_init_lruvec(lruvec); + 
lru_gen_init_lruvec(lruvec); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 1a7c7d537db6..9af6793a6534 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include @@ -5606,6 +5607,8 @@ static int __init init_lru_gen(void) if (sysfs_create_group(mm_kobj, &lru_gen_attr_group)) pr_err("lru_gen: failed to create sysfs group\n"); + wsr_init_sysfs(NULL); + debugfs_create_file("lru_gen", 0644, NULL, NULL, &lru_gen_rw_fops); debugfs_create_file("lru_gen_full", 0444, NULL, NULL, &lru_gen_ro_fops); @@ -5613,6 +5616,35 @@ static int __init init_lru_gen(void) }; late_initcall(init_lru_gen); +/****************************************************************************** + * workingset reporting + ******************************************************************************/ +#ifdef CONFIG_WORKINGSET_REPORT +void wsr_refresh_scan(struct lruvec *lruvec) +{ + DEFINE_MAX_SEQ(lruvec); + struct scan_control sc = { + .may_writepage = true, + .may_unmap = true, + .may_swap = true, + .proactive = true, + .reclaim_idx = MAX_NR_ZONES - 1, + .gfp_mask = GFP_KERNEL, + }; + unsigned int flags; + + set_task_reclaim_state(current, &sc.reclaim_state); + flags = memalloc_noreclaim_save(); + /* + * setting can_swap=true and force_scan=true ensures + * proper workingset stats when the system cannot swap. + */ + try_to_inc_max_seq(lruvec, max_seq, &sc, true, true); + memalloc_noreclaim_restore(flags); + set_task_reclaim_state(current, NULL); +} +#endif /* CONFIG_WORKINGSET_REPORT */ + #else /* !CONFIG_LRU_GEN */ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc) diff --git a/mm/workingset_report.c b/mm/workingset_report.c new file mode 100644 index 000000000000..7b872b9fa7da --- /dev/null +++ b/mm/workingset_report.c @@ -0,0 +1,438 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +void wsr_init_pgdat(struct pglist_data *pgdat) +{ + mutex_init(&pgdat->wsr_update_mutex); + RCU_INIT_POINTER(pgdat->wsr_page_age_bins, NULL); +} + +void wsr_destroy_pgdat(struct pglist_data *pgdat) +{ + struct wsr_report_bins __rcu *bins; + + mutex_lock(&pgdat->wsr_update_mutex); + bins = rcu_replace_pointer(pgdat->wsr_page_age_bins, NULL, + lockdep_is_held(&pgdat->wsr_update_mutex)); + kfree_rcu(bins, rcu); + mutex_unlock(&pgdat->wsr_update_mutex); + mutex_destroy(&pgdat->wsr_update_mutex); +} + +void wsr_init_lruvec(struct lruvec *lruvec) +{ + struct wsr_state *wsr = &lruvec->wsr; + + memset(wsr, 0, sizeof(*wsr)); + mutex_init(&wsr->page_age_lock); +} + +void wsr_destroy_lruvec(struct lruvec *lruvec) +{ + struct wsr_state *wsr = &lruvec->wsr; + + mutex_destroy(&wsr->page_age_lock); + kfree(wsr->page_age); + memset(wsr, 0, sizeof(*wsr)); +} + +static int workingset_report_intervals_parse(char *src, + struct wsr_report_bins *bins) +{ + int err = 0, i = 0; + char *cur, *next = strim(src); + + if (*next == '\0') + return 0; + + while ((cur = strsep(&next, ","))) { + unsigned int interval; + + err = kstrtouint(cur, 0, &interval); + if (err) + goto out; + + bins->idle_age[i] = msecs_to_jiffies(interval); + if (i > 0 && bins->idle_age[i] <= bins->idle_age[i - 1]) { + err = -EINVAL; + goto out; + } + + if (++i == WORKINGSET_REPORT_MAX_NR_BINS) { + err = -ERANGE; + goto out; + } + } + + if (i && i < WORKINGSET_REPORT_MIN_NR_BINS - 1) { + err = -ERANGE; + goto out; + } + + bins->nr_bins = i; + bins->idle_age[i] = 
WORKINGSET_INTERVAL_MAX; +out: + return err ?: i; +} + +static unsigned long get_gen_start_time(const struct lru_gen_folio *lrugen, + unsigned long seq, + unsigned long max_seq, + unsigned long curr_timestamp) +{ + int younger_gen; + + if (seq == max_seq) + return curr_timestamp; + younger_gen = lru_gen_from_seq(seq + 1); + return READ_ONCE(lrugen->timestamps[younger_gen]); +} + +static void collect_page_age_type(const struct lru_gen_folio *lrugen, + struct wsr_report_bin *bin, + unsigned long max_seq, unsigned long min_seq, + unsigned long curr_timestamp, int type) +{ + unsigned long seq; + + for (seq = max_seq; seq + 1 > min_seq; seq--) { + int gen, zone; + unsigned long gen_end, gen_start, size = 0; + + gen = lru_gen_from_seq(seq); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) + size += max( + READ_ONCE(lrugen->nr_pages[gen][type][zone]), + 0L); + + gen_start = get_gen_start_time(lrugen, seq, max_seq, + curr_timestamp); + gen_end = READ_ONCE(lrugen->timestamps[gen]); + + while (bin->idle_age != WORKINGSET_INTERVAL_MAX && + time_before(gen_end + bin->idle_age, curr_timestamp)) { + unsigned long gen_in_bin = (long)gen_start - + (long)curr_timestamp + + (long)bin->idle_age; + unsigned long gen_len = (long)gen_start - (long)gen_end; + + if (!gen_len) + break; + if (gen_in_bin) { + unsigned long split_bin = + size / gen_len * gen_in_bin; + + bin->nr_pages[type] += split_bin; + size -= split_bin; + } + gen_start = curr_timestamp - bin->idle_age; + bin++; + } + bin->nr_pages[type] += size; + } +} + +/* + * proportionally aggregate Multi-gen LRU bins into a working set report + * MGLRU generations: + * current time + * | max_seq timestamp + * | | max_seq - 1 timestamp + * | | | unbounded + * | | | | + * -------------------------------- + * | max_seq | ... | ... | min_seq + * -------------------------------- + * + * Bins: + * + * current time + * | current - idle_age[0] + * | | current - idle_age[1] + * | | | unbounded + * | | | | + * ------------------------------ + * | bin 0 | ... | ... | bin n-1 + * ------------------------------ + * + * Assume the heuristic that pages are in the MGLRU generation + * through uniform accesses, so we can aggregate them + * proportionally into bins. + */ +static void collect_page_age(struct wsr_page_age_histo *page_age, + const struct lruvec *lruvec) +{ + int type; + const struct lru_gen_folio *lrugen = &lruvec->lrugen; + unsigned long curr_timestamp = jiffies; + unsigned long max_seq = READ_ONCE((lruvec)->lrugen.max_seq); + unsigned long min_seq[ANON_AND_FILE] = { + READ_ONCE(lruvec->lrugen.min_seq[LRU_GEN_ANON]), + READ_ONCE(lruvec->lrugen.min_seq[LRU_GEN_FILE]), + }; + struct wsr_report_bin *bin = &page_age->bins[0]; + + for (type = 0; type < ANON_AND_FILE; type++) + collect_page_age_type(lrugen, bin, max_seq, min_seq[type], + curr_timestamp, type); +} + +/* First step: hierarchically scan child memcgs. */ +static void refresh_scan(struct wsr_state *wsr, struct mem_cgroup *root, + struct pglist_data *pgdat) +{ + struct mem_cgroup *memcg; + + memcg = mem_cgroup_iter(root, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + + wsr_refresh_scan(lruvec); + cond_resched(); + } while ((memcg = mem_cgroup_iter(root, memcg, NULL))); +} + +/* Second step: aggregate child memcgs into the page age histogram. 
*/ +static void refresh_aggregate(struct wsr_page_age_histo *page_age, + struct mem_cgroup *root, + struct pglist_data *pgdat) +{ + struct mem_cgroup *memcg; + struct wsr_report_bin *bin; + + for (bin = page_age->bins; + bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) { + bin->nr_pages[0] = 0; + bin->nr_pages[1] = 0; + } + /* the last used bin has idle_age == WORKINGSET_INTERVAL_MAX. */ + bin->nr_pages[0] = 0; + bin->nr_pages[1] = 0; + + memcg = mem_cgroup_iter(root, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + + collect_page_age(page_age, lruvec); + cond_resched(); + } while ((memcg = mem_cgroup_iter(root, memcg, NULL))); + WRITE_ONCE(page_age->timestamp, jiffies); +} + +static void copy_node_bins(struct pglist_data *pgdat, + struct wsr_page_age_histo *page_age) +{ + struct wsr_report_bins *node_page_age_bins; + int i = 0; + + rcu_read_lock(); + node_page_age_bins = rcu_dereference(pgdat->wsr_page_age_bins); + if (!node_page_age_bins) + goto nocopy; + for (i = 0; i < node_page_age_bins->nr_bins; ++i) + page_age->bins[i].idle_age = node_page_age_bins->idle_age[i]; + +nocopy: + page_age->bins[i].idle_age = WORKINGSET_INTERVAL_MAX; + rcu_read_unlock(); +} + +bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, + struct pglist_data *pgdat) +{ + struct wsr_page_age_histo *page_age; + + if (!READ_ONCE(wsr->page_age)) + return false; + + refresh_scan(wsr, root, pgdat); + mutex_lock(&wsr->page_age_lock); + page_age = READ_ONCE(wsr->page_age); + if (page_age) { + copy_node_bins(pgdat, page_age); + refresh_aggregate(page_age, root, pgdat); + } + mutex_unlock(&wsr->page_age_lock); + return !!page_age; +} +EXPORT_SYMBOL_GPL(wsr_refresh_report); + +static struct pglist_data *kobj_to_pgdat(struct kobject *kobj) +{ + int nid = IS_ENABLED(CONFIG_NUMA) ? 
kobj_to_dev(kobj)->id : + first_memory_node; + + return NODE_DATA(nid); +} + +static struct wsr_state *kobj_to_wsr(struct kobject *kobj) +{ + return &mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj))->wsr; +} + +static ssize_t page_age_intervals_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + struct wsr_report_bins *bins; + int len = 0; + struct pglist_data *pgdat = kobj_to_pgdat(kobj); + + rcu_read_lock(); + bins = rcu_dereference(pgdat->wsr_page_age_bins); + if (bins) { + int i; + int nr_bins = bins->nr_bins; + + for (i = 0; i < bins->nr_bins; ++i) { + len += sysfs_emit_at( + buf, len, "%u", + jiffies_to_msecs(bins->idle_age[i])); + if (i + 1 < nr_bins) + len += sysfs_emit_at(buf, len, ","); + } + } + len += sysfs_emit_at(buf, len, "\n"); + rcu_read_unlock(); + + return len; +} + +static ssize_t page_age_intervals_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *src, size_t len) +{ + struct wsr_report_bins *bins = NULL, __rcu *old; + char *buf = NULL; + int err = 0; + struct pglist_data *pgdat = kobj_to_pgdat(kobj); + + buf = kstrdup(src, GFP_KERNEL); + if (!buf) { + err = -ENOMEM; + goto failed; + } + + bins = + kzalloc(sizeof(struct wsr_report_bins), GFP_KERNEL); + + if (!bins) { + err = -ENOMEM; + goto failed; + } + + err = workingset_report_intervals_parse(buf, bins); + if (err < 0) + goto failed; + + if (err == 0) { + kfree(bins); + bins = NULL; + } + + mutex_lock(&pgdat->wsr_update_mutex); + old = rcu_replace_pointer(pgdat->wsr_page_age_bins, bins, + lockdep_is_held(&pgdat->wsr_update_mutex)); + mutex_unlock(&pgdat->wsr_update_mutex); + kfree_rcu(old, rcu); + kfree(buf); + return len; +failed: + kfree(bins); + kfree(buf); + + return err; +} + +static struct kobj_attribute page_age_intervals_attr = + __ATTR_RW(page_age_intervals); + +static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, + char *buf) +{ + struct wsr_report_bin *bin; + int ret = 0; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + + mutex_lock(&wsr->page_age_lock); + if (!wsr->page_age) + wsr->page_age = + kzalloc(sizeof(struct wsr_page_age_histo), GFP_KERNEL); + mutex_unlock(&wsr->page_age_lock); + + wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj)); + + mutex_lock(&wsr->page_age_lock); + if (!wsr->page_age) + goto unlock; + for (bin = wsr->page_age->bins; + bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) + ret += sysfs_emit_at(buf, ret, "%u anon=%lu file=%lu\n", + jiffies_to_msecs(bin->idle_age), + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + + ret += sysfs_emit_at(buf, ret, "%lu anon=%lu file=%lu\n", + WORKINGSET_INTERVAL_MAX, + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + +unlock: + mutex_unlock(&wsr->page_age_lock); + return ret; +} + +static struct kobj_attribute page_age_attr = __ATTR_RO(page_age); + +static struct attribute *workingset_report_attrs[] = { + &page_age_intervals_attr.attr, &page_age_attr.attr, NULL +}; + +static const struct attribute_group workingset_report_attr_group = { + .name = "workingset_report", + .attrs = workingset_report_attrs, +}; + +void wsr_init_sysfs(struct node *node) +{ + struct kobject *kobj = node ? 
&node->dev.kobj : mm_kobj; + struct wsr_state *wsr; + + if (IS_ENABLED(CONFIG_NUMA) && !node) + return; + + wsr = kobj_to_wsr(kobj); + + if (sysfs_create_group(kobj, &workingset_report_attr_group)) + pr_warn("Workingset report failed to create sysfs files\n"); +} +EXPORT_SYMBOL_GPL(wsr_init_sysfs); + +void wsr_remove_sysfs(struct node *node) +{ + struct kobject *kobj = &node->dev.kobj; + struct wsr_state *wsr; + + if (IS_ENABLED(CONFIG_NUMA) && !node) + return; + + wsr = kobj_to_wsr(kobj); + sysfs_remove_group(kobj, &workingset_report_attr_group); +} +EXPORT_SYMBOL_GPL(wsr_remove_sysfs); From patchwork Sat May 4 07:30:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuanchu Xie X-Patchwork-Id: 13653806 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6E202171C2 for ; Sat, 4 May 2024 07:30:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807853; cv=none; b=kQvPx71ucYXsLhou83P+Kbc+VCDRKz3vCZTiBFxA8z9OKMSXIOiMxUJb36T/yGh2IbmL5ZJ6SyHnz9F9KGHgKCdORaqIxrJvk8XCs74L3CoPmbyscK7nGjSD7ULASeRAn2JRtMByCuYBWapErI6UYQDYhX8NO8e3HWwExp9c9M8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807853; c=relaxed/simple; bh=QoEQP71r0sLhLVdUCy47tqFkf/xeFd57hSMDzOdqrGw=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=f1AeM2pnWb/9EbCo9R0IAK2eCMYPXvRBOXmzIKHJsprvyKyOhaUVdvb6cYjC6S4pxuVxiTIzq/ugo0BaNL7hdH3wifTChX+Q8AEr03nG3v6Ungw2HfJBVqXG4+FF+bjopbAkBHp7SblSySiAqpxfOMoxqI6YfEmRfptVxdu3WDw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=HPv9KoG9; arc=none smtp.client-ip=209.85.219.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="HPv9KoG9" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-dc6ceade361so1162627276.0 for ; Sat, 04 May 2024 00:30:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714807850; x=1715412650; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=+esKTT8j37wn7GqAICF1gBoKkjcIdKDssnbLJgB8I3o=; b=HPv9KoG9buPkZHNDFkJhQC7Ws3qQBnw6osOrxOE6I2LXXdfWpKRSa06MfgVn2eLheJ vEYahYHa8cWjmNRNnK63pooiS9yQ+vtzmUTD6IUnE5l5/5RbILB5A5cOwMpXKDqE5CzK ZMKbP/BjCFu2CV1Fr9scxW+NOm1fIBivn5vmhW7tNLcC9zMmXbd9GvNHoFSf/r5DcTK2 rCRkZ1fJvL7exM21oLfhOMETD7+KQIK9MNVTC07jXwcO0grl1ak74nh0HO6LXBVm1SX7 tBY7cLxN+aMw7Nk9JOQmbnqHW8rht6tGxdTdk5y8eJFNfIb7v7s3lxjsFIK+QQ3xTmj7 +DuQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1714807850; x=1715412650; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
Date: Sat, 4 May 2024 00:30:07 -0700
Message-ID: <20240504073011.4000534-4-yuanchu@google.com>
In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com>
Subject: [PATCH v1 3/7] mm: use refresh interval to rate-limit workingset report aggregation
From: Yuanchu Xie

The refresh interval rate-limits reads of the workingset page age
histogram. When a report is generated, its timestamp is noted, and the
same report is returned to readers until it ages beyond the refresh
interval, at which point a new report is generated.
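For instance (a hypothetical userspace sequence, not part of the patch,
assuming the workingset_report sysfs files described below): after setting
refresh_interval to 2000 ms, two back-to-back reads of page_age within
that window are served from the same cached report rather than triggering
two scans.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void dump(const char *path)
{
	char buf[4096];
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	close(fd);
}

int main(void)
{
	const char *dir = "/sys/devices/system/node/node0/workingset_report";
	char path[256];
	int fd;

	snprintf(path, sizeof(path), "%s/refresh_interval", dir);
	fd = open(path, O_WRONLY);
	if (fd >= 0) {
		write(fd, "2000", 4);
		close(fd);
	}

	snprintf(path, sizeof(path), "%s/page_age", dir);
	dump(path);	/* scans and aggregates a fresh report */
	dump(path);	/* within 2s: reuses the cached report */
	return 0;
}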
Sysfs interface /sys/devices/system/node/nodeX/workingset_report/refresh_interval time in milliseconds specifying how long the report is valid for Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 1 + mm/internal.h | 2 +- mm/vmscan.c | 27 +++++++---- mm/workingset_report.c | 81 +++++++++++++++++++++++++------ 4 files changed, 85 insertions(+), 26 deletions(-) diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index d7c2ee14ec87..8bae6a600410 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -37,6 +37,7 @@ struct wsr_page_age_histo { }; struct wsr_state { + unsigned long refresh_interval; /* breakdown of workingset by page age */ struct mutex page_age_lock; struct wsr_page_age_histo *page_age; diff --git a/mm/internal.h b/mm/internal.h index 5e0caba64ee4..151f09c6983e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -210,7 +210,7 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason * in mm/wsr.c */ /* Requires wsr->page_age_lock held */ -void wsr_refresh_scan(struct lruvec *lruvec); +void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); #endif /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 9af6793a6534..b7293baac1dd 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -5620,7 +5620,7 @@ late_initcall(init_lru_gen); * workingset reporting ******************************************************************************/ #ifdef CONFIG_WORKINGSET_REPORT -void wsr_refresh_scan(struct lruvec *lruvec) +void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval) { DEFINE_MAX_SEQ(lruvec); struct scan_control sc = { @@ -5633,15 +5633,22 @@ void wsr_refresh_scan(struct lruvec *lruvec) }; unsigned int flags; - set_task_reclaim_state(current, &sc.reclaim_state); - flags = memalloc_noreclaim_save(); - /* - * setting can_swap=true and force_scan=true ensures - * proper workingset stats when the system cannot swap. - */ - try_to_inc_max_seq(lruvec, max_seq, &sc, true, true); - memalloc_noreclaim_restore(flags); - set_task_reclaim_state(current, NULL); + if (refresh_interval) { + int gen = lru_gen_from_seq(max_seq); + unsigned long birth = READ_ONCE(lruvec->lrugen.timestamps[gen]); + + if (time_is_before_jiffies(birth + refresh_interval)) { + set_task_reclaim_state(current, &sc.reclaim_state); + flags = memalloc_noreclaim_save(); + /* + * setting can_swap=true and force_scan=true ensures + * proper workingset stats when the system cannot swap. + */ + try_to_inc_max_seq(lruvec, max_seq, &sc, true, true); + memalloc_noreclaim_restore(flags); + set_task_reclaim_state(current, NULL); + } + } } #endif /* CONFIG_WORKINGSET_REPORT */ diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 7b872b9fa7da..56155acbe7e9 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -195,7 +195,8 @@ static void collect_page_age(struct wsr_page_age_histo *page_age, /* First step: hierarchically scan child memcgs. 
*/ static void refresh_scan(struct wsr_state *wsr, struct mem_cgroup *root, - struct pglist_data *pgdat) + struct pglist_data *pgdat, + unsigned long refresh_interval) { struct mem_cgroup *memcg; @@ -203,7 +204,7 @@ static void refresh_scan(struct wsr_state *wsr, struct mem_cgroup *root, do { struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); - wsr_refresh_scan(lruvec); + wsr_refresh_scan(lruvec, refresh_interval); cond_resched(); } while ((memcg = mem_cgroup_iter(root, memcg, NULL))); } @@ -257,17 +258,25 @@ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, struct pglist_data *pgdat) { struct wsr_page_age_histo *page_age; + unsigned long refresh_interval = READ_ONCE(wsr->refresh_interval); if (!READ_ONCE(wsr->page_age)) return false; - refresh_scan(wsr, root, pgdat); + if (!refresh_interval) + return false; + mutex_lock(&wsr->page_age_lock); page_age = READ_ONCE(wsr->page_age); - if (page_age) { - copy_node_bins(pgdat, page_age); - refresh_aggregate(page_age, root, pgdat); - } + if (!page_age) + goto unlock; + if (page_age->timestamp && + time_is_after_jiffies(page_age->timestamp + refresh_interval)) + goto unlock; + refresh_scan(wsr, root, pgdat, refresh_interval); + copy_node_bins(pgdat, page_age); + refresh_aggregate(page_age, root, pgdat); +unlock: mutex_unlock(&wsr->page_age_lock); return !!page_age; } @@ -286,6 +295,52 @@ static struct wsr_state *kobj_to_wsr(struct kobject *kobj) return &mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj))->wsr; } +static ssize_t refresh_interval_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + struct wsr_state *wsr = kobj_to_wsr(kobj); + unsigned int interval = READ_ONCE(wsr->refresh_interval); + + return sysfs_emit(buf, "%u\n", jiffies_to_msecs(interval)); +} + +static ssize_t refresh_interval_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t len) +{ + unsigned int interval; + int err; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + err = kstrtouint(buf, 0, &interval); + if (err) + return err; + + mutex_lock(&wsr->page_age_lock); + if (interval && !wsr->page_age) { + struct wsr_page_age_histo *page_age = + kzalloc(sizeof(struct wsr_page_age_histo), GFP_KERNEL); + + if (!page_age) { + err = -ENOMEM; + goto unlock; + } + wsr->page_age = page_age; + } + if (!interval && wsr->page_age) { + kfree(wsr->page_age); + wsr->page_age = NULL; + } + + WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(interval)); +unlock: + mutex_unlock(&wsr->page_age_lock); + return err ?: len; +} + +static struct kobj_attribute refresh_interval_attr = + __ATTR_RW(refresh_interval); + static ssize_t page_age_intervals_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { @@ -369,13 +424,6 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, int ret = 0; struct wsr_state *wsr = kobj_to_wsr(kobj); - - mutex_lock(&wsr->page_age_lock); - if (!wsr->page_age) - wsr->page_age = - kzalloc(sizeof(struct wsr_page_age_histo), GFP_KERNEL); - mutex_unlock(&wsr->page_age_lock); - wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj)); mutex_lock(&wsr->page_age_lock); @@ -401,7 +449,10 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, static struct kobj_attribute page_age_attr = __ATTR_RO(page_age); static struct attribute *workingset_report_attrs[] = { - &page_age_intervals_attr.attr, &page_age_attr.attr, NULL + &refresh_interval_attr.attr, + &page_age_intervals_attr.attr, + &page_age_attr.attr, + NULL }; static const struct 
attribute_group workingset_report_attr_group = {

From patchwork Sat May 4 07:30:08 2024
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 13653807
Date: Sat, 4 May 2024 00:30:08 -0700
Message-ID: <20240504073011.4000534-5-yuanchu@google.com>
In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com>
Subject: [PATCH v1 4/7] mm: report workingset during memory pressure driven scanning
From: Yuanchu Xie

When a node reaches its low watermarks and wakes up kswapd, notify all
userspace programs waiting on the workingset page age histogram of the
memory pressure, so a userspace agent can read the workingset report in
time and make policy decisions, such as logging, oom-killing, or
migration.

Sysfs interface:

/sys/devices/system/node/nodeX/workingset_report/report_threshold
	Time in milliseconds that specifies how often a userspace agent
	can be notified of node memory pressure.
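A hypothetical userspace agent (not part of the patch) could wait for this
notification by polling the page_age file, assuming the usual sysfs
poll()/notify semantics, roughly as follows:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	struct pollfd pfd;
	ssize_t n;

	pfd.fd = open("/sys/devices/system/node/node0/workingset_report/page_age",
		      O_RDONLY);
	if (pfd.fd < 0)
		return 1;
	pfd.events = POLLPRI;

	/* Read once first (the conventional sysfs pattern), then wait. */
	read(pfd.fd, buf, sizeof(buf) - 1);

	for (;;) {
		if (poll(&pfd, 1, -1) <= 0)
			continue;
		/* Notified: rewind and fetch the refreshed histogram. */
		lseek(pfd.fd, 0, SEEK_SET);
		n = read(pfd.fd, buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			fputs(buf, stdout);
			/* ...decide whether to log, reclaim, or migrate... */
		}
	}
}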
Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 4 +++ mm/internal.h | 6 +++++ mm/vmscan.c | 44 +++++++++++++++++++++++++++++++ mm/workingset_report.c | 43 +++++++++++++++++++++++++++++- 4 files changed, 96 insertions(+), 1 deletion(-) diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index 8bae6a600410..2ec8b927b200 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -37,7 +37,11 @@ struct wsr_page_age_histo { }; struct wsr_state { + unsigned long report_threshold; unsigned long refresh_interval; + + struct kernfs_node *page_age_sys_file; + /* breakdown of workingset by page age */ struct mutex page_age_lock; struct wsr_page_age_histo *page_age; diff --git a/mm/internal.h b/mm/internal.h index 151f09c6983e..36480c7ac0dd 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -209,8 +209,14 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason /* * in mm/wsr.c */ +void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat); /* Requires wsr->page_age_lock held */ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); +#else +static inline void notify_workingset(struct mem_cgroup *memcg, + struct pglist_data *pgdat) +{ +} #endif /* diff --git a/mm/vmscan.c b/mm/vmscan.c index b7293baac1dd..1f11b252c15e 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2535,6 +2535,15 @@ static bool can_age_anon_pages(struct pglist_data *pgdat, return can_demote(pgdat->node_id, sc); } +#ifdef CONFIG_WORKINGSET_REPORT +static void try_to_report_workingset(struct pglist_data *pgdat, struct scan_control *sc); +#else +static inline void try_to_report_workingset(struct pglist_data *pgdat, + struct scan_control *sc) +{ +} +#endif + #ifdef CONFIG_LRU_GEN #ifdef CONFIG_LRU_GEN_ENABLED @@ -3936,6 +3945,8 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc) if (!min_ttl || sc->order || sc->priority == DEF_PRIORITY) return; + try_to_report_workingset(pgdat, sc); + memcg = mem_cgroup_iter(NULL, NULL, NULL); do { struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); @@ -5650,6 +5661,36 @@ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval) } } } + +static void try_to_report_workingset(struct pglist_data *pgdat, + struct scan_control *sc) +{ + struct mem_cgroup *memcg = sc->target_mem_cgroup; + struct wsr_state *wsr = &mem_cgroup_lruvec(memcg, pgdat)->wsr; + unsigned long threshold = READ_ONCE(wsr->report_threshold); + + if (sc->priority == DEF_PRIORITY) + return; + + if (!threshold) + return; + + if (!mutex_trylock(&wsr->page_age_lock)) + return; + + if (!wsr->page_age) { + mutex_unlock(&wsr->page_age_lock); + return; + } + + if (time_is_after_jiffies(wsr->page_age->timestamp + threshold)) { + mutex_unlock(&wsr->page_age_lock); + return; + } + + mutex_unlock(&wsr->page_age_lock); + notify_workingset(memcg, pgdat); +} #endif /* CONFIG_WORKINGSET_REPORT */ #else /* !CONFIG_LRU_GEN */ @@ -6177,6 +6218,9 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc) if (zone->zone_pgdat == last_pgdat) continue; last_pgdat = zone->zone_pgdat; + + if (!sc->proactive) + try_to_report_workingset(zone->zone_pgdat, sc); shrink_node(zone->zone_pgdat, sc); } diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 56155acbe7e9..7dcf38525016 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -295,6 +295,33 @@ static struct wsr_state *kobj_to_wsr(struct kobject *kobj) return 
&mem_cgroup_lruvec(NULL, kobj_to_pgdat(kobj))->wsr; } +static ssize_t report_threshold_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) +{ + struct wsr_state *wsr = kobj_to_wsr(kobj); + unsigned int threshold = READ_ONCE(wsr->report_threshold); + + return sysfs_emit(buf, "%u\n", jiffies_to_msecs(threshold)); +} + +static ssize_t report_threshold_store(struct kobject *kobj, + struct kobj_attribute *attr, + const char *buf, size_t len) +{ + unsigned int threshold; + struct wsr_state *wsr = kobj_to_wsr(kobj); + + if (kstrtouint(buf, 0, &threshold)) + return -EINVAL; + + WRITE_ONCE(wsr->report_threshold, msecs_to_jiffies(threshold)); + + return len; +} + +static struct kobj_attribute report_threshold_attr = + __ATTR_RW(report_threshold); + static ssize_t refresh_interval_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { @@ -449,6 +476,7 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, static struct kobj_attribute page_age_attr = __ATTR_RO(page_age); static struct attribute *workingset_report_attrs[] = { + &report_threshold_attr.attr, &refresh_interval_attr.attr, &page_age_intervals_attr.attr, &page_age_attr.attr, @@ -470,8 +498,13 @@ void wsr_init_sysfs(struct node *node) wsr = kobj_to_wsr(kobj); - if (sysfs_create_group(kobj, &workingset_report_attr_group)) + if (sysfs_create_group(kobj, &workingset_report_attr_group)) { pr_warn("Workingset report failed to create sysfs files\n"); + return; + } + + wsr->page_age_sys_file = + kernfs_walk_and_get(kobj->sd, "workingset_report/page_age"); } EXPORT_SYMBOL_GPL(wsr_init_sysfs); @@ -484,6 +517,14 @@ void wsr_remove_sysfs(struct node *node) return; wsr = kobj_to_wsr(kobj); + kernfs_put(wsr->page_age_sys_file); sysfs_remove_group(kobj, &workingset_report_attr_group); } EXPORT_SYMBOL_GPL(wsr_remove_sysfs); + +void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) +{ + struct wsr_state *wsr = &mem_cgroup_lruvec(memcg, pgdat)->wsr; + + kernfs_notify(wsr->page_age_sys_file); +} From patchwork Sat May 4 07:30:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuanchu Xie X-Patchwork-Id: 13653808 Received: from mail-yb1-f201.google.com (mail-yb1-f201.google.com [209.85.219.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 87C8C1AACC for ; Sat, 4 May 2024 07:30:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807857; cv=none; b=o6ZjpoSJHuWKKFkozl5JZewkS+ttT6TLRMcuiawQZltfQ1Xvf2HhCI5H0ab6BmaYYQvGE18INLbYBMiNNDGZ6HPwRaOc6zEXSFNoicRfnkzL5+1U4O2+C+CWf3Pl+rpQG/rcDA6hzXFLrbhPYiyzVDccLGor2CTq+cmjDoGya8c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807857; c=relaxed/simple; bh=MSoymEmtXwkbG+6gY++euyvnqVcaVC3DoNV/ayFTRfs=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=pHiQVU+3nqifPe6rxMZyDFceSm/3wuQsPhYlSnQj7b52Ymwzgt4bx+YlIJBINPk03g3odRzsfXNnyc66lZzyni7edusIrKW8nMN6lj51Ef8o/Z5auf2Ldwf+ROxihzwWEC6w+p4PIcZd8tWJKrTM6Vl2YFAnrNc86ttq0yuu6Ac= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com 
header.b=2l1wn0ax; arc=none smtp.client-ip=209.85.219.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="2l1wn0ax" Received: by mail-yb1-f201.google.com with SMTP id 3f1490d57ef6-de54ccab44aso917788276.3 for ; Sat, 04 May 2024 00:30:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714807854; x=1715412654; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=OLxnf5SNOBvzXGjywTGAaG+Rg/LHUrvR7uH9tSjT/P0=; b=2l1wn0axjtsq5mBfXMrnHtyYOaqUFpvjrowNAI3N7f3N6ctw1Qn+nip3Gb+moA66s7 7sM4UB+81hjXu3SVZeUURlZO2Uhrti4X6QCtjynLPUI/cLfiyn3esSKykq9qFytg6+wA hs/xcgExL4+f/g/BX0yOP4busbigwusRXWPDJ/WRUmc8go7DFPmI+WrNQi7RWTtgz4tR C41W9Db5G+/vo8ruc+hgHvWlMXkB3+Wwje7zWBG5kJok8NPYfk4Sozgr9tFzWnVVZt6I 9cgwfJI8NOo61GnFr37IroSAImjnyXUng2tW/lovMCbuEwYyO2663Qp7oXD75Mye4Mvy oOGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1714807854; x=1715412654; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=OLxnf5SNOBvzXGjywTGAaG+Rg/LHUrvR7uH9tSjT/P0=; b=llm5riPNsf/i5K3g7Hfk17h/Z8jJR6ul4nAwcyVmN/XdOtoGbxWdGK99GhNjFyY+kN +4BqCYdYGjCLDJAPpxwk3zdPAi3t2CC5HBzwmXBJ7rt4gTzjzVmOMjRaqYXFZ6YGybEQ 6ogK4WxtkuKSy7LfpkpeYp96bcR+rU074sgFmF0ngJY3jkUgSqP3e/hgM9lnkDafjix/ M8uw4coepBwCiFy/mH4woOQSYFtOIukonU0KdtTolrmvi+MqIiXqRPFVhTC3sZhE/w5F uFcVdLfxjqkpMniiqer7Pnq+8Fto9PP2Mqa/ACQvr3Cjn4GMgE/2vZ5yISdT1FcHJoer 1pYw== X-Forwarded-Encrypted: i=1; AJvYcCX7i72jaagHMf+EBbbAh02xIme/2I13DZsKBha3TtUy+47KgPqnR/e9hdYOh2MIcBuwjULRqkokw71vioLuUNJSCsQ78fkIUC8NoumY9VGX X-Gm-Message-State: AOJu0Yxd6rOu+p73kGa8aG2BhpdcmVgG6fCfaFdH1SHxgumH/0L8TWz8 zp+Ja2c30ey7NXrULElaEqWTqGxA19YhvFFaOPFvUt3ppnrp+X2pDchhAtZdN0N9BX8WyjcPQDU KTLMlSA== X-Google-Smtp-Source: AGHT+IFMWIH0/B+L30esndwBVP4G+01F+1D+FQUWzGkdOFH/SC8SF749caf3CDFWxV8mCGfLO0eXl3n/4iIt X-Received: from yuanchu-desktop.svl.corp.google.com ([2620:15c:2a3:200:da8f:bd07:9977:eb21]) (user=yuanchu job=sendgmr) by 2002:a05:6902:1146:b0:de5:2325:72a1 with SMTP id p6-20020a056902114600b00de5232572a1mr1491589ybu.4.1714807854549; Sat, 04 May 2024 00:30:54 -0700 (PDT) Date: Sat, 4 May 2024 00:30:09 -0700 In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240504073011.4000534-1-yuanchu@google.com> X-Mailer: git-send-email 2.45.0.rc1.225.g2a3ae87e7f-goog Message-ID: <20240504073011.4000534-6-yuanchu@google.com> Subject: [PATCH v1 5/7] mm: extend working set reporting to memcgs From: Yuanchu Xie To: David Hildenbrand , "Aneesh Kumar K.V" , Khalid Aziz , Henry Huang , Yu Zhao , Dan Williams , Gregory Price , Huang Ying Cc: Kalesh Singh , Wei Xu , David Rientjes , Greg Kroah-Hartman , "Rafael J. Wysocki" , Andrew Morton , Johannes Weiner , Michal Hocko , Roman Gushchin , Muchun Song , Shuah Khan , Yosry Ahmed , Matthew Wilcox , Sudarshan Rajagopalan , Kairui Song , "Michael S. 
Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org Break down the system-wide working set reporting into per-memcg reports, which aggregages its children hierarchically. The per-node working set reporting histograms and refresh/report threshold files are presented as memcg files, showing a report containing all the nodes. The per-node page age interval is configurable in sysfs and not available per-memcg, while the refresh interval and report threshold are configured per-memcg. Memcg interface: /sys/fs/cgroup/.../memory.workingset.page_age The memcg equivalent of the sysfs workingset page age histogram, breaks down the workingset of this memcg and its children into page age intervals. Each node is prefixed with a node header and a newline. Non-proactive direct reclaim on this memcg can also wake up userspace agents that are waiting on this file. e.g. N0 1000 anon=0 file=0 2000 anon=0 file=0 3000 anon=0 file=0 4000 anon=0 file=0 5000 anon=0 file=0 18446744073709551615 anon=0 file=0 /sys/fs/cgroup/.../memory.workingset.refresh_interval The memcg equivalent of the sysfs refresh interval. A per-node number of how much time a page age histogram is valid for, in milliseconds. e.g. echo N0=2000 > memory.workingset.refresh_interval /sys/fs/cgroup/.../memory.workingset.report_threshold The memcg equivalent of the sysfs report threshold. A per-node number of how often userspace agent waiting on the page age histogram can be woken up, in milliseconds. e.g. echo N0=1000 > memory.workingset.report_threshold Signed-off-by: Yuanchu Xie --- include/linux/memcontrol.h | 5 + include/linux/workingset_report.h | 6 +- mm/internal.h | 2 + mm/memcontrol.c | 178 +++++++++++++++++++++++++++++- mm/workingset_report.c | 12 +- 5 files changed, 198 insertions(+), 5 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 20ff87f8e001..7d7bc0928961 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -335,6 +335,11 @@ struct mem_cgroup { struct lru_gen_mm_list mm_list; #endif +#ifdef CONFIG_WORKINGSET_REPORT + /* memory.workingset.page_age file */ + struct cgroup_file workingset_page_age_file; +#endif + struct mem_cgroup_per_node *nodeinfo[]; }; diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index 2ec8b927b200..ae412d408037 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -9,6 +9,7 @@ struct mem_cgroup; struct pglist_data; struct node; struct lruvec; +struct cgroup_file; #ifdef CONFIG_WORKINGSET_REPORT @@ -40,7 +41,10 @@ struct wsr_state { unsigned long report_threshold; unsigned long refresh_interval; - struct kernfs_node *page_age_sys_file; + union { + struct kernfs_node *page_age_sys_file; + struct cgroup_file *page_age_cgroup_file; + }; /* breakdown of workingset by page age */ struct mutex page_age_lock; diff --git a/mm/internal.h b/mm/internal.h index 36480c7ac0dd..3730c8399ad4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -212,6 +212,8 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat); /* Requires wsr->page_age_lock held */ void wsr_refresh_scan(struct lruvec *lruvec, unsigned long refresh_interval); +int workingset_report_intervals_parse(char *src, + struct wsr_report_bins *bins); #else 
static inline void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index b5b67c93c287..c6c0d2772279 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -7005,6 +7005,162 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf, return nbytes; } +#ifdef CONFIG_WORKINGSET_REPORT +static int memory_ws_refresh_interval_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + + seq_printf(m, "N%d=%u ", nid, + jiffies_to_msecs(READ_ONCE(wsr->refresh_interval))); + } + seq_putc(m, '\n'); + + return 0; +} + +static ssize_t memory_wsr_threshold_parse(char *buf, size_t nbytes, + unsigned int *nid_out, + unsigned int *msecs) +{ + char *node, *threshold; + unsigned int nid; + int err; + + buf = strstrip(buf); + threshold = buf; + node = strsep(&threshold, "="); + + if (*node != 'N') + return -EINVAL; + + err = kstrtouint(node + 1, 0, &nid); + if (err) + return err; + + if (nid >= nr_node_ids || !node_state(nid, N_MEMORY)) + return -EINVAL; + + err = kstrtouint(threshold, 0, msecs); + if (err) + return err; + + *nid_out = nid; + + return nbytes; +} + +static ssize_t memory_ws_refresh_interval_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + unsigned int nid, msecs; + struct wsr_state *wsr; + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); + ssize_t ret = memory_wsr_threshold_parse(buf, nbytes, &nid, &msecs); + + if (ret < 0) + return ret; + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + + mutex_lock(&wsr->page_age_lock); + if (msecs && !wsr->page_age) { + struct wsr_page_age_histo *page_age = + kzalloc(sizeof(struct wsr_page_age_histo), GFP_KERNEL); + + if (!page_age) { + ret = -ENOMEM; + goto unlock; + } + wsr->page_age = page_age; + } + if (!msecs && wsr->page_age) { + kfree(wsr->page_age); + wsr->page_age = NULL; + } + + WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(msecs)); +unlock: + mutex_unlock(&wsr->page_age_lock); + return ret; +} + +static int memory_ws_report_threshold_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + + seq_printf(m, "N%d=%u ", nid, + jiffies_to_msecs(READ_ONCE(wsr->report_threshold))); + } + seq_putc(m, '\n'); + + return 0; +} + +static ssize_t memory_ws_report_threshold_write(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + unsigned int nid, msecs; + struct wsr_state *wsr; + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); + ssize_t ret = memory_wsr_threshold_parse(buf, nbytes, &nid, &msecs); + + if (ret < 0) + return ret; + + wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + WRITE_ONCE(wsr->report_threshold, msecs_to_jiffies(msecs)); + return ret; +} + +static int memory_ws_page_age_show(struct seq_file *m, void *v) +{ + int nid; + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + + for_each_node_state(nid, N_MEMORY) { + struct wsr_state *wsr = + &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; + struct wsr_report_bin *bin; + + if (!READ_ONCE(wsr->page_age)) + continue; + + wsr_refresh_report(wsr, memcg, NODE_DATA(nid)); + mutex_lock(&wsr->page_age_lock); + if (!wsr->page_age) + goto unlock; + seq_printf(m, "N%d\n", nid); + for (bin = wsr->page_age->bins; 
+ bin->idle_age != WORKINGSET_INTERVAL_MAX; bin++) + seq_printf(m, "%u anon=%lu file=%lu\n", + jiffies_to_msecs(bin->idle_age), + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + + seq_printf(m, "%lu anon=%lu file=%lu\n", WORKINGSET_INTERVAL_MAX, + bin->nr_pages[0] * PAGE_SIZE, + bin->nr_pages[1] * PAGE_SIZE); + +unlock: + mutex_unlock(&wsr->page_age_lock); + } + + return 0; +} +#endif + static struct cftype memory_files[] = { { .name = "current", @@ -7073,7 +7229,27 @@ static struct cftype memory_files[] = { .flags = CFTYPE_NS_DELEGATABLE, .write = memory_reclaim, }, - { } /* terminate */ +#ifdef CONFIG_WORKINGSET_REPORT + { + .name = "workingset.refresh_interval", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_refresh_interval_show, + .write = memory_ws_refresh_interval_write, + }, + { + .name = "workingset.report_threshold", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .seq_show = memory_ws_report_threshold_show, + .write = memory_ws_report_threshold_write, + }, + { + .name = "workingset.page_age", + .flags = CFTYPE_NOT_ON_ROOT | CFTYPE_NS_DELEGATABLE, + .file_offset = offsetof(struct mem_cgroup, workingset_page_age_file), + .seq_show = memory_ws_page_age_show, + }, +#endif + {} /* terminate */ }; struct cgroup_subsys memory_cgrp_subsys = { diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 7dcf38525016..5a9bf3ebb914 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -37,9 +37,12 @@ void wsr_destroy_pgdat(struct pglist_data *pgdat) void wsr_init_lruvec(struct lruvec *lruvec) { struct wsr_state *wsr = &lruvec->wsr; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); memset(wsr, 0, sizeof(*wsr)); mutex_init(&wsr->page_age_lock); + if (memcg && !mem_cgroup_is_root(memcg)) + wsr->page_age_cgroup_file = &memcg->workingset_page_age_file; } void wsr_destroy_lruvec(struct lruvec *lruvec) @@ -51,8 +54,8 @@ void wsr_destroy_lruvec(struct lruvec *lruvec) memset(wsr, 0, sizeof(*wsr)); } -static int workingset_report_intervals_parse(char *src, - struct wsr_report_bins *bins) +int workingset_report_intervals_parse(char *src, + struct wsr_report_bins *bins) { int err = 0, i = 0; char *cur, *next = strim(src); @@ -526,5 +529,8 @@ void notify_workingset(struct mem_cgroup *memcg, struct pglist_data *pgdat) { struct wsr_state *wsr = &mem_cgroup_lruvec(memcg, pgdat)->wsr; - kernfs_notify(wsr->page_age_sys_file); + if (mem_cgroup_is_root(memcg)) + kernfs_notify(wsr->page_age_sys_file); + else + cgroup_file_notify(wsr->page_age_cgroup_file); } From patchwork Sat May 4 07:30:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuanchu Xie X-Patchwork-Id: 13653809 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2EEA2FBF0 for ; Sat, 4 May 2024 07:30:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807859; cv=none; b=Ncvad+2cdUJoD/HfzFVQKkWGrxFJvCdojToYopNSZGgU9q0Md8R0jp5dnqc/p7cEdNlM0gYMt2zPZ0j91xk1AiymeAdhgcHibW4+zyhTClVWkJK4AOBxOYjLA7Jt3UQokmDuT0jLZWTz/F8yQWZoCmE3JpM6EYUAYVnbvBbrwMI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807859; c=relaxed/simple; 
bh=behUX3jjuMClKmnueKgHMjdB/LO1/p0zjg69wS3+fro=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=J6gb0sbM02vRu3NB6yN74zJbZfyIAXLVyMv2w6s0lwPY/yCCzSany+bvoPIAickNy95rmuFwvdtJCKPaHFLHSQ7JzQjTR0FDep6acfLmmOCiFVcul6KAqjLqt5TfMI6YAuGSoVCxhbT2Mh7YR4P+3TEBjx1xXOp0xF3R4catmOw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=4OXVLhDZ; arc=none smtp.client-ip=209.85.219.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="4OXVLhDZ" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-de617c7649dso904625276.0 for ; Sat, 04 May 2024 00:30:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714807856; x=1715412656; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=rp6y1AtpkHfeXuTgpbngprvPTppmrZY5FskXOV8DSWk=; b=4OXVLhDZ6NRYxuizriSi6cA4LGUsT5EbRG6qIZnzqCIoO/0RXr7nelf6Nhi1kFwIxn DCDmsM4l3s/kEvUKtoTQuRXEG1MPUS52fpLTCnfmO3u8uKKBaASuAZCXTU/t8cgcCc5v qgCPYl8FRoxOMmI78zYJdGdyCBsV55IckaNZAvFYkXjbh3/qlOYZYPxgB7X4Fz0fsP/S NtPEC8F+9NFFJcyHqG3NcZfcxqNYkZlKwhJ1aiOn75zT2i12/pfuqr57uD9PHV4UWVmV OF/GrS+xIqA6ZzWZdKhRpHgtrEK/Da+BMcoqtUn6tsCYXEDPSJQmaqU5mUUyo4Ow9cmT 6JUg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1714807856; x=1715412656; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=rp6y1AtpkHfeXuTgpbngprvPTppmrZY5FskXOV8DSWk=; b=VhDkaHgWGV3oMpVkr2q2pb7bsWtVAn1cI6ztYm5kKuBH+c0A4OcvOOzSrBzM25jV52 ozp3hKWULqJOBCEtLe+HhNzhP1iIti/jaORD3eD4olMCDeFEGWb5dAwtkShwrhh2iQSv 7NApgOiSa9wz8BnDR0RAjltRHf9dEpR8B8ih2SUWRFVDAG6+8ed2C72SyKniWTGZf3Sj hcFbE4+c4YFoDrk7gD58AZCOHsaiRSDqdyy/Jo/h1Uj8rA2el1ixAIlH2neBZvv6Ot6p oEQj1O5lpXJhjgePyl9c9LFJEzOOYLg0zxVm/a7t1RX/HH33N+NRcHPQSnDKxQ7LKJgM LgmQ== X-Forwarded-Encrypted: i=1; AJvYcCX4OOYtmTr+rofqAiw7qUffiouEfP3M0fZ8UJ9vDXzwbn9AFmepNtmfMmZOIRnIZh1SuEYJh25a/VKfof36pjQpP5uu5Cdr1NGUkl65ojgV X-Gm-Message-State: AOJu0YwufECeZUVt9ept27tYKvF9+UvQetYQHDaJ0E99Q11Vlf4S0kgI oY6QCIfANht0t9WSqlnC9pfLoLMa/AEmSE8W9Pk3DLsk5B91rK5PnHwVuhYTQDlv+D+JsRlhj5K Xyt7jog== X-Google-Smtp-Source: AGHT+IGU6CJeYeCi16Op/dGx3kAMymE/Mcfr7yxitn52cAsdXe0smsbTYAlXgGjRedM+9wSAtFV6UoqmPn5w X-Received: from yuanchu-desktop.svl.corp.google.com ([2620:15c:2a3:200:da8f:bd07:9977:eb21]) (user=yuanchu job=sendgmr) by 2002:a05:6902:1242:b0:de6:1603:2dd5 with SMTP id t2-20020a056902124200b00de616032dd5mr680541ybu.9.1714807856331; Sat, 04 May 2024 00:30:56 -0700 (PDT) Date: Sat, 4 May 2024 00:30:10 -0700 In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240504073011.4000534-1-yuanchu@google.com> X-Mailer: git-send-email 2.45.0.rc1.225.g2a3ae87e7f-goog Message-ID: <20240504073011.4000534-7-yuanchu@google.com> Subject: [PATCH v1 6/7] mm: add kernel aging thread for workingset reporting From: 
Yuanchu Xie To: David Hildenbrand , "Aneesh Kumar K.V" , Khalid Aziz , Henry Huang , Yu Zhao , Dan Williams , Gregory Price , Huang Ying Cc: Kalesh Singh , Wei Xu , David Rientjes , Greg Kroah-Hartman , "Rafael J. Wysocki" , Andrew Morton , Johannes Weiner , Michal Hocko , Roman Gushchin , Muchun Song , Shuah Khan , Yosry Ahmed , Matthew Wilcox , Sudarshan Rajagopalan , Kairui Song , "Michael S. Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org For reliable and timely aging on memcgs, the page age histograms have to be read in a timely manner. A kernel thread makes this easier by aging memcgs with a valid refresh_interval whenever they are due for a refresh, and it also reduces the latency for any userspace consumers of the page age histogram. The kernel aging thread is gated behind CONFIG_WORKINGSET_REPORT_AGING. Debugging stats may be added in the future for when aging cannot keep up with the configured refresh_interval. Signed-off-by: Yuanchu Xie --- include/linux/workingset_report.h | 11 ++- mm/Kconfig | 6 ++ mm/Makefile | 1 + mm/memcontrol.c | 8 +- mm/workingset_report.c | 15 +++- mm/workingset_report_aging.c | 127 ++++++++++++++++++++++++++++++ 6 files changed, 162 insertions(+), 6 deletions(-) create mode 100644 mm/workingset_report_aging.c diff --git a/include/linux/workingset_report.h b/include/linux/workingset_report.h index ae412d408037..9294023db5a8 100644 --- a/include/linux/workingset_report.h +++ b/include/linux/workingset_report.h @@ -63,7 +63,16 @@ void wsr_remove_sysfs(struct node *node); * The next refresh time is stored in refresh_time. */ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, - struct pglist_data *pgdat); + struct pglist_data *pgdat, unsigned long *refresh_time); + +#ifdef CONFIG_WORKINGSET_REPORT_AGING +void wsr_wakeup_aging_thread(void); +#else /* CONFIG_WORKINGSET_REPORT_AGING */ +static inline void wsr_wakeup_aging_thread(void) +{ +} +#endif /* CONFIG_WORKINGSET_REPORT_AGING */ + #else static inline void wsr_init_lruvec(struct lruvec *lruvec) { diff --git a/mm/Kconfig b/mm/Kconfig index 212f203b10b9..1e6aa1bd63f2 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1270,6 +1270,12 @@ config WORKINGSET_REPORT This option exports stats and events giving the user more insight into its memory working set. +config WORKINGSET_REPORT_AGING + bool "Workingset report kernel aging thread" + depends on WORKINGSET_REPORT + help + Performs aging on memcgs with their configured refresh intervals. 
+ source "mm/damon/Kconfig" endmenu diff --git a/mm/Makefile b/mm/Makefile index 57093657030d..7caae7f2d6cf 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -93,6 +93,7 @@ obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o obj-$(CONFIG_PAGE_COUNTER) += page_counter.o obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o obj-$(CONFIG_WORKINGSET_REPORT) += workingset_report.o +obj-$(CONFIG_WORKINGSET_REPORT_AGING) += workingset_report_aging.o ifdef CONFIG_SWAP obj-$(CONFIG_MEMCG) += swap_cgroup.o endif diff --git a/mm/memcontrol.c b/mm/memcontrol.c index c6c0d2772279..6ada26da6de6 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -7060,12 +7060,12 @@ static ssize_t memory_ws_refresh_interval_write(struct kernfs_open_file *of, { unsigned int nid, msecs; struct wsr_state *wsr; + unsigned long old_interval; struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); ssize_t ret = memory_wsr_threshold_parse(buf, nbytes, &nid, &msecs); if (ret < 0) return ret; - wsr = &mem_cgroup_lruvec(memcg, NODE_DATA(nid))->wsr; mutex_lock(&wsr->page_age_lock); @@ -7084,9 +7084,13 @@ static ssize_t memory_ws_refresh_interval_write(struct kernfs_open_file *of, wsr->page_age = NULL; } + old_interval = READ_ONCE(wsr->refresh_interval); WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(msecs)); unlock: mutex_unlock(&wsr->page_age_lock); + if (ret > 0 && msecs && + (!old_interval || jiffies_to_msecs(old_interval) > msecs)) + wsr_wakeup_aging_thread(); return ret; } @@ -7137,7 +7141,7 @@ static int memory_ws_page_age_show(struct seq_file *m, void *v) if (!READ_ONCE(wsr->page_age)) continue; - wsr_refresh_report(wsr, memcg, NODE_DATA(nid)); + wsr_refresh_report(wsr, memcg, NODE_DATA(nid), NULL); mutex_lock(&wsr->page_age_lock); if (!wsr->page_age) goto unlock; diff --git a/mm/workingset_report.c b/mm/workingset_report.c index 5a9bf3ebb914..46bb9469d5b3 100644 --- a/mm/workingset_report.c +++ b/mm/workingset_report.c @@ -258,7 +258,7 @@ static void copy_node_bins(struct pglist_data *pgdat, } bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, - struct pglist_data *pgdat) + struct pglist_data *pgdat, unsigned long *refresh_time) { struct wsr_page_age_histo *page_age; unsigned long refresh_interval = READ_ONCE(wsr->refresh_interval); @@ -275,10 +275,14 @@ bool wsr_refresh_report(struct wsr_state *wsr, struct mem_cgroup *root, goto unlock; if (page_age->timestamp && time_is_after_jiffies(page_age->timestamp + refresh_interval)) - goto unlock; + goto time; refresh_scan(wsr, root, pgdat, refresh_interval); copy_node_bins(pgdat, page_age); refresh_aggregate(page_age, root, pgdat); + +time: + if (refresh_time) + *refresh_time = page_age->timestamp + refresh_interval; unlock: mutex_unlock(&wsr->page_age_lock); return !!page_age; @@ -341,6 +345,7 @@ static ssize_t refresh_interval_store(struct kobject *kobj, unsigned int interval; int err; struct wsr_state *wsr = kobj_to_wsr(kobj); + unsigned long old_interval = 0; err = kstrtouint(buf, 0, &interval); if (err) @@ -362,9 +367,13 @@ static ssize_t refresh_interval_store(struct kobject *kobj, wsr->page_age = NULL; } + old_interval = READ_ONCE(wsr->refresh_interval); WRITE_ONCE(wsr->refresh_interval, msecs_to_jiffies(interval)); unlock: mutex_unlock(&wsr->page_age_lock); + if (!err && interval && + (!old_interval || jiffies_to_msecs(old_interval) > interval)) + wsr_wakeup_aging_thread(); return err ?: len; } @@ -454,7 +463,7 @@ static ssize_t page_age_show(struct kobject *kobj, struct kobj_attribute *attr, int ret = 0; struct wsr_state *wsr = 
kobj_to_wsr(kobj); - wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj)); + wsr_refresh_report(wsr, NULL, kobj_to_pgdat(kobj), NULL); mutex_lock(&wsr->page_age_lock); if (!wsr->page_age) diff --git a/mm/workingset_report_aging.c b/mm/workingset_report_aging.c new file mode 100644 index 000000000000..91ad5020778a --- /dev/null +++ b/mm/workingset_report_aging.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Workingset report kernel aging thread + * + * Performs aging on behalf of memcgs with their configured refresh interval. + * While a userspace program can periodically read the page age breakdown + * per-memcg and trigger aging, the kernel performing aging is less overhead, + * more consistent, and more reliable for the use case where every memcg should + * be aged according to their refresh interval. + */ +#define pr_fmt(fmt) "workingset report aging: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static DECLARE_WAIT_QUEUE_HEAD(aging_wait); +static bool refresh_pending; + +static bool do_aging_node(int nid, unsigned long *next_wake_time) +{ + struct mem_cgroup *memcg; + bool should_wait = true; + struct pglist_data *pgdat = NODE_DATA(nid); + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + struct wsr_state *wsr = &lruvec->wsr; + unsigned long refresh_time; + + /* use returned time to decide when to wake up next */ + if (wsr_refresh_report(wsr, memcg, pgdat, &refresh_time)) { + if (should_wait) { + should_wait = false; + *next_wake_time = refresh_time; + } else if (time_before(refresh_time, *next_wake_time)) { + *next_wake_time = refresh_time; + } + } + + cond_resched(); + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL))); + + return should_wait; +} + +static int do_aging(void *unused) +{ + while (!kthread_should_stop()) { + int nid; + long timeout_ticks; + unsigned long next_wake_time; + bool should_wait = true; + + WRITE_ONCE(refresh_pending, false); + for_each_node_state(nid, N_MEMORY) { + unsigned long node_next_wake_time; + + if (do_aging_node(nid, &node_next_wake_time)) + continue; + if (should_wait) { + should_wait = false; + next_wake_time = node_next_wake_time; + } else if (time_before(node_next_wake_time, + next_wake_time)) { + next_wake_time = node_next_wake_time; + } + } + + if (should_wait) { + wait_event_interruptible(aging_wait, refresh_pending); + continue; + } + + /* sleep until next aging */ + timeout_ticks = next_wake_time - jiffies; + if (timeout_ticks > 0 && + timeout_ticks != MAX_SCHEDULE_TIMEOUT) { + schedule_timeout_idle(timeout_ticks); + continue; + } + } + return 0; +} + +/* Invoked when refresh_interval shortens or changes to a non-zero value. 
*/ +void wsr_wakeup_aging_thread(void) +{ + WRITE_ONCE(refresh_pending, true); + wake_up_interruptible(&aging_wait); +} + +static struct task_struct *aging_thread; + +static int aging_init(void) +{ + struct task_struct *task; + + task = kthread_run(do_aging, NULL, "kagingd"); + + if (IS_ERR(task)) { + pr_err("Failed to create aging kthread\n"); + return PTR_ERR(task); + } + + aging_thread = task; + pr_info("module loaded\n"); + return 0; +} + +static void aging_exit(void) +{ + kthread_stop(aging_thread); + aging_thread = NULL; + pr_info("module unloaded\n"); +} + +module_init(aging_init); +module_exit(aging_exit); From patchwork Sat May 4 07:30:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yuanchu Xie X-Patchwork-Id: 13653810 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 50B091097B for ; Sat, 4 May 2024 07:30:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807861; cv=none; b=NN8cw8osOgHkpSMdCxywChdc48+rDWrfl9fuHOX15vp5WgVDspeXfUrX84RIhQ1wzCO5bix68b99Aa0grix7zynKcnVl3PjtLsrYx9+0OtGXWa8uns3Z8yupIwdA59Pl9yDE6kDeE94kWta5ck2pyCNptKWsx5eOFtNPmoI01zo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1714807861; c=relaxed/simple; bh=BlDzWHCSjkNgKZ/KiEt5J8LN1NIFMw/gZZ4TExloyH8=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Dq+cMfiNdEk8opFabYcPU6Fe/JpISD+e4UxibYsLoGjCLcXXqnEq5xbJq4pDfbxQClywX+KnzWP5E7i3xnzZbeuZFT0j2nOj471pZ79Qc3Qo3+LWrjrDgLCRzRnBcRNlkYgqpQQh+q9A3vkmINxAUQ/73wqtQeiHAFMuh+MQYd8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=mcOowOJe; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--yuanchu.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="mcOowOJe" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-61be3f082b0so7202547b3.1 for ; Sat, 04 May 2024 00:30:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1714807858; x=1715412658; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=vL9lgW8qVIx8MeHGwPXStOCpDRl3GIu+/l/FFBxuYkQ=; b=mcOowOJeJ4vUxup4zJ5mgazocPAJJRbqJTlj67DugkjZNeX/tXLZWTRWfaSrYCN9Od HGKNQCqQksdkaxWevrt8Z1m7LS/3ym7r27ubqVlgk1Mvk2YAlOTTXYvgpYbc1QbLBhms QF/DxIpGD1J9FLGclP4Fu3tvrap4Ink5HTu/ipFmoFVPSSzAeX7TSbhnk7Q2CbeqwmlE rPbM7+qwuEtDhwZ26GpqqHcdYMUjYzs1Au6g1W3EKAqS+kAC19k6zPU/IYhaQWv719CP lzU6cSP7z+c7c/mjNpyoztX6OO2MtrsM6uhl+wBkDU/wCrio1nuI+XfomnY+m0Tcoiyc JLwA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1714807858; x=1715412658; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=vL9lgW8qVIx8MeHGwPXStOCpDRl3GIu+/l/FFBxuYkQ=; b=aybZ0wlThZ4vgl21JPXZo8BdCG3os0xQ8E9EZrQlqmS7m+YTdvnfEfGWy137K0OuJn Lz3PxO2Bnket+bU6eAMTu8tXTDtggC9/98U4pUViFHom11JayE8q6DPnW8yphJJb94J+ 2YFwT1qViAtz1e1Xah0DZP93TAsiRbdkrj4RSK8ZjjG9EJMUd8dZf7FOZ22ltiwvR61C ej0iR1MeMA6CiIYpQHWoSXXE5UeFhzrhSYz9OuB6owMDBCIDMKZ7DpdShcqnW4BSG02G 0R1ZoiuTbtdNNCEWW9oIJBONNf4BPBdeXhJf6WzYrKBwZPmKDkSilISM/1WyqdbNEaf1 EitA== X-Forwarded-Encrypted: i=1; AJvYcCUnlEYBBDyVvWfg7Cb4iXySnf71M3YTVe+r5q5+B8KvTCJioQnFOHsZJ6cBx7DFwdzU9iIxc5he3Y11NNay85h21eURwr7MBEKqHVP1+JoO X-Gm-Message-State: AOJu0YyYvoPoLoAZYaSeA8vK8mK6XdMpMN12E3b4mPzXO+pMgrR6uZsx bhqgfwbom7N54+/gKSTcaMg1k4e/6VWsNWaFtpuCJVOx2D80LmxC+OmOyPchx6RZ00lnc+muYSu b8YHfAQ== X-Google-Smtp-Source: AGHT+IHnbCIEeEg7IOWMO6eHDa1eyzZhoU6gy1yqm8dJH1eRCMefg/4i4vquA5hk27zVzvfkaaQMHF1vGZK+ X-Received: from yuanchu-desktop.svl.corp.google.com ([2620:15c:2a3:200:da8f:bd07:9977:eb21]) (user=yuanchu job=sendgmr) by 2002:a25:2f53:0:b0:de1:d49:7ff6 with SMTP id v80-20020a252f53000000b00de10d497ff6mr611247ybv.7.1714807858358; Sat, 04 May 2024 00:30:58 -0700 (PDT) Date: Sat, 4 May 2024 00:30:11 -0700 In-Reply-To: <20240504073011.4000534-1-yuanchu@google.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240504073011.4000534-1-yuanchu@google.com> X-Mailer: git-send-email 2.45.0.rc1.225.g2a3ae87e7f-goog Message-ID: <20240504073011.4000534-8-yuanchu@google.com> Subject: [PATCH v1 7/7] selftest: test system-wide workingset reporting From: Yuanchu Xie To: David Hildenbrand , "Aneesh Kumar K.V" , Khalid Aziz , Henry Huang , Yu Zhao , Dan Williams , Gregory Price , Huang Ying Cc: Kalesh Singh , Wei Xu , David Rientjes , Greg Kroah-Hartman , "Rafael J. Wysocki" , Andrew Morton , Johannes Weiner , Michal Hocko , Roman Gushchin , Muchun Song , Shuah Khan , Yosry Ahmed , Matthew Wilcox , Sudarshan Rajagopalan , Kairui Song , "Michael S. Tsirkin" , Vasily Averin , Nhat Pham , Miaohe Lin , Qi Zheng , Abel Wu , "Vishal Moola (Oracle)" , Kefeng Wang , Yuanchu Xie , linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kselftest@vger.kernel.org A basic test that verifies the working set size of a simple memory accessor. It should work with or without the aging thread. Question: I don't know how to best test file memory in selftests. Is there a place where I should put the temporary file? /tmp can be tmpfs mounted in many distros. 
Signed-off-by: Yuanchu Xie --- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 3 + .../testing/selftests/mm/workingset_report.c | 317 +++++++++++++++++ .../testing/selftests/mm/workingset_report.h | 39 ++ .../selftests/mm/workingset_report_test.c | 332 ++++++++++++++++++ 5 files changed, 692 insertions(+) create mode 100644 tools/testing/selftests/mm/workingset_report.c create mode 100644 tools/testing/selftests/mm/workingset_report.h create mode 100644 tools/testing/selftests/mm/workingset_report_test.c diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index 4ff10ea61461..14a2412c8257 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -46,3 +46,4 @@ gup_longterm mkdirty va_high_addr_switch hugetlb_fault_after_madv +workingset_report_test diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 2453add65d12..c0869bf07e99 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -70,6 +70,7 @@ TEST_GEN_FILES += ksm_tests TEST_GEN_FILES += ksm_functional_tests TEST_GEN_FILES += mdwe_test TEST_GEN_FILES += hugetlb_fault_after_madv +TEST_GEN_FILES += workingset_report_test ifneq ($(ARCH),arm64) TEST_GEN_FILES += soft-dirty @@ -123,6 +124,8 @@ $(TEST_GEN_FILES): vm_util.c thp_settings.c $(OUTPUT)/uffd-stress: uffd-common.c $(OUTPUT)/uffd-unit-tests: uffd-common.c +$(OUTPUT)/workingset_report_test: workingset_report.c + ifeq ($(ARCH),x86_64) BINARIES_32 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_32)) BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64)) diff --git a/tools/testing/selftests/mm/workingset_report.c b/tools/testing/selftests/mm/workingset_report.c new file mode 100644 index 000000000000..0d744bae5432 --- /dev/null +++ b/tools/testing/selftests/mm/workingset_report.c @@ -0,0 +1,317 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "workingset_report.h" + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../kselftest.h" + +#define SYSFS_NODE_ONLINE "/sys/devices/system/node/online" +#define PROC_DROP_CACHES "/proc/sys/vm/drop_caches" + +/* Returns read len on success, or -errno on failure. */ +static ssize_t read_text(const char *path, char *buf, size_t max_len) +{ + ssize_t len; + int fd, err; + size_t bytes_read = 0; + + if (!max_len) + return -EINVAL; + + fd = open(path, O_RDONLY); + if (fd < 0) + return -errno; + + while (bytes_read < max_len - 1) { + len = read(fd, buf + bytes_read, max_len - 1 - bytes_read); + + if (len <= 0) + break; + bytes_read += len; + } + + buf[bytes_read] = '\0'; + + err = -errno; + close(fd); + return len < 0 ? err : bytes_read; +} + +/* Returns written len on success, or -errno on failure. */ +static ssize_t write_text(const char *path, const char *buf, ssize_t max_len) +{ + int fd, len, err; + size_t bytes_written = 0; + + fd = open(path, O_WRONLY | O_APPEND); + if (fd < 0) + return -errno; + + while (bytes_written < max_len) { + len = write(fd, buf + bytes_written, max_len - bytes_written); + + if (len < 0) + break; + bytes_written += len; + } + + err = -errno; + close(fd); + return len < 0 ? 
err : bytes_written; +} + +static long read_num(const char *path) +{ + char buf[21]; + + if (read_text(path, buf, sizeof(buf)) <= 0) + return -1; + return (long)strtoul(buf, NULL, 10); +} + +static int write_num(const char *path, unsigned long n) +{ + char buf[21]; + + sprintf(buf, "%lu", n); + if (write_text(path, buf, strlen(buf)) < 0) + return -1; + return 0; +} + +long sysfs_get_refresh_interval(int nid) +{ + char file[128]; + + snprintf( + file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/refresh_interval", + nid); + return read_num(file); +} + +int sysfs_set_refresh_interval(int nid, long interval) +{ + char file[128]; + + snprintf( + file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/refresh_interval", + nid); + return write_num(file, interval); +} + +int sysfs_get_page_age_intervals_str(int nid, char *buf, int len) +{ + char path[128]; + + snprintf( + path, + sizeof(path), + "/sys/devices/system/node/node%d/workingset_report/page_age_intervals", + nid); + return read_text(path, buf, len); + +} + +int sysfs_set_page_age_intervals_str(int nid, const char *buf, int len) +{ + char path[128]; + + snprintf( + path, + sizeof(path), + "/sys/devices/system/node/node%d/workingset_report/page_age_intervals", + nid); + return write_text(path, buf, len); +} + +int sysfs_set_page_age_intervals(int nid, const char *const intervals[], + int nr_intervals) +{ + char file[128]; + char buf[1024]; + int i; + int err, len = 0; + + for (i = 0; i < nr_intervals; ++i) { + err = snprintf(buf + len, sizeof(buf) - len, "%s", intervals[i]); + + if (err < 0) + return err; + len += err; + + if (i < nr_intervals - 1) { + err = snprintf(buf + len, sizeof(buf) - len, ","); + if (err < 0) + return err; + len += err; + } + } + + snprintf( + file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/page_age_intervals", + nid); + return write_text(file, buf, len); +} + +int get_nr_nodes(void) +{ + char buf[22]; + char *found; + + if (read_text(SYSFS_NODE_ONLINE, buf, sizeof(buf)) <= 0) + return -1; + found = strstr(buf, "-"); + if (found) + return (int)strtoul(found + 1, NULL, 10) + 1; + return (long)strtoul(buf, NULL, 10) + 1; +} + +int drop_pagecache(void) +{ + return write_num(PROC_DROP_CACHES, 1); +} + +ssize_t sysfs_page_age_read(int nid, char *buf, size_t len) + +{ + char file[128]; + + snprintf(file, + sizeof(file), + "/sys/devices/system/node/node%d/workingset_report/page_age", + nid); + return read_text(file, buf, len); +} + +/* + * Finds the first occurrence of "N\n" + * Modifies buf to terminate before the next occurrence of "N". 
+ * Returns a substring of buf starting after "N\n" + */ +char *page_age_split_node(char *buf, int nid, char **next) +{ + char node_str[5]; + char *found; + int node_str_len; + + node_str_len = snprintf(node_str, sizeof(node_str), "N%u\n", nid); + + /* find the node prefix first */ + found = strstr(buf, node_str); + if (!found) { + ksft_print_msg("cannot find '%s' in page_age", node_str); + return NULL; + } + found += node_str_len; + + *next = strchr(found, 'N'); + if (*next) + *(*next - 1) = '\0'; + + return found; +} + +ssize_t page_age_read(const char *buf, const char *interval, int pagetype) +{ + static const char * const type[ANON_AND_FILE] = { "anon=", "file=" }; + char *found; + + found = strstr(buf, interval); + if (!found) { + ksft_print_msg("cannot find %s in page_age", interval); + return -1; + } + found = strstr(found, type[pagetype]); + if (!found) { + ksft_print_msg("cannot find %s in page_age", type[pagetype]); + return -1; + } + found += strlen(type[pagetype]); + return (long)strtoul(found, NULL, 10); +} + +static const char *TEMP_FILE = "/tmp/workingset_selftest"; +void cleanup_file_workingset(void) +{ + remove(TEMP_FILE); +} + +int alloc_file_workingset(void *arg) +{ + int err = 0; + char *ptr; + int fd; + int ppid; + char *mapped; + size_t size = (size_t)arg; + size_t page_size = getpagesize(); + + ppid = getppid(); + + fd = open(TEMP_FILE, O_RDWR | O_CREAT, 0600); + if (fd < 0) { + err = -errno; + ksft_perror("failed to open temp file\n"); + goto cleanup; + } + + if (fallocate(fd, 0, 0, size) < 0) { + err = -errno; + ksft_perror("fallocate"); + goto cleanup; + } + + mapped = (char *)mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, + fd, 0); + if (mapped == MAP_FAILED) { + err = -errno; + ksft_perror("mmap"); + goto cleanup; + } + + while (getppid() == ppid) { + sync(); + for (ptr = mapped; ptr < mapped + size; ptr += page_size) + *ptr = *ptr ^ 0xFF; + } + +cleanup: + cleanup_file_workingset(); + return err; +} + +int alloc_anon_workingset(void *arg) +{ + char *buf, *ptr; + int ppid = getppid(); + size_t size = (size_t)arg; + size_t page_size = getpagesize(); + + buf = malloc(size); + + if (!buf) { + ksft_print_msg("cannot allocate anon workingset"); + exit(1); + } + + while (getppid() == ppid) { + for (ptr = buf; ptr < buf + size; ptr += page_size) + *ptr = *ptr ^ 0xFF; + } + + free(buf); + return 0; +} diff --git a/tools/testing/selftests/mm/workingset_report.h b/tools/testing/selftests/mm/workingset_report.h new file mode 100644 index 000000000000..c5c281e4069b --- /dev/null +++ b/tools/testing/selftests/mm/workingset_report.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef WORKINGSET_REPORT_H_ +#define WORKINGSET_REPORT_H_ + +#ifndef _GNU_SOURCE +#define _GNU_SOURCE +#endif + +#include +#include +#include +#include +#include + +#define PAGETYPE_ANON 0 +#define PAGETYPE_FILE 1 +#define ANON_AND_FILE 2 + +int get_nr_nodes(void); +int drop_pagecache(void); + +long sysfs_get_refresh_interval(int nid); +int sysfs_set_refresh_interval(int nid, long interval); + +int sysfs_get_page_age_intervals_str(int nid, char *buf, int len); +int sysfs_set_page_age_intervals_str(int nid, const char *buf, int len); + +int sysfs_set_page_age_intervals(int nid, const char *const intervals[], + int nr_intervals); + +char *page_age_split_node(char *buf, int nid, char **next); +ssize_t sysfs_page_age_read(int nid, char *buf, size_t len); +ssize_t page_age_read(const char *buf, const char *interval, int pagetype); + +int alloc_file_workingset(void *arg); +void 
cleanup_file_workingset(void); +int alloc_anon_workingset(void *arg); + +#endif /* WORKINGSET_REPORT_H_ */ diff --git a/tools/testing/selftests/mm/workingset_report_test.c b/tools/testing/selftests/mm/workingset_report_test.c new file mode 100644 index 000000000000..9a86c2215182 --- /dev/null +++ b/tools/testing/selftests/mm/workingset_report_test.c @@ -0,0 +1,332 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "workingset_report.h" + +#include +#include +#include +#include + +#include "../clone3/clone3_selftests.h" + +#define REFRESH_INTERVAL 5000 +#define MB(x) (x << 20) + +static void sleep_ms(int milliseconds) +{ + struct timespec ts; + + ts.tv_sec = milliseconds / 1000; + ts.tv_nsec = (milliseconds % 1000) * 1000000; + nanosleep(&ts, NULL); +} + +/* + * Checks if two given values differ by less than err% of their sum. + */ +static inline int values_close(long a, long b, int err) +{ + return labs(a - b) <= (a + b) / 100 * err; +} + +static const char * const PAGE_AGE_INTERVALS[] = { + "6000", "10000", "15000", "18446744073709551615", +}; +#define NR_PAGE_AGE_INTERVALS (ARRAY_SIZE(PAGE_AGE_INTERVALS)) + +static int set_page_age_intervals_all_nodes(const char *intervals, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + int err = sysfs_set_page_age_intervals_str( + i, &intervals[i * 1024], strlen(&intervals[i * 1024])); + + if (err < 0) + return err; + } + return 0; +} + +static int get_page_age_intervals_all_nodes(char *intervals, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + int err = sysfs_get_page_age_intervals_str( + i, &intervals[i * 1024], 1024); + + if (err < 0) + return err; + } + return 0; +} + +static int set_refresh_interval_all_nodes(const long *interval, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + int err = sysfs_set_refresh_interval(i, interval[i]); + + if (err < 0) + return err; + } + return 0; +} + +static int get_refresh_interval_all_nodes(long *interval, int nr_nodes) +{ + int i; + + for (i = 0; i < nr_nodes; ++i) { + long val = sysfs_get_refresh_interval(i); + + if (val < 0) + return val; + interval[i] = val; + } + return 0; +} + +static pid_t clone_and_run(int fn(void *arg), void *arg) +{ + pid_t pid; + + struct __clone_args args = { + .exit_signal = SIGCHLD, + }; + + pid = sys_clone3(&args, sizeof(struct __clone_args)); + + if (pid == 0) + exit(fn(arg)); + + return pid; +} + +static int read_workingset(int pagetype, int nid, + unsigned long page_age[NR_PAGE_AGE_INTERVALS]) +{ + int i, err; + char buf[4096]; + + err = sysfs_page_age_read(nid, buf, sizeof(buf)); + if (err < 0) + return err; + + for (i = 0; i < NR_PAGE_AGE_INTERVALS; ++i) { + err = page_age_read(buf, PAGE_AGE_INTERVALS[i], pagetype); + if (err < 0) + return err; + page_age[i] = err; + } + + return 0; +} + +static ssize_t read_interval_all_nodes(int pagetype, int interval) +{ + int i, err; + unsigned long page_age[NR_PAGE_AGE_INTERVALS]; + ssize_t ret = 0; + int nr_nodes = get_nr_nodes(); + + for (i = 0; i < nr_nodes; ++i) { + err = read_workingset(pagetype, i, page_age); + if (err < 0) + return err; + + ret += page_age[interval]; + } + + return ret; +} + +#define TEST_SIZE MB(500l) + +static int run_test(int f(void)) +{ + int i, err, test_result; + long *old_refresh_intervals; + long *new_refresh_intervals; + char *old_page_age_intervals; + int nr_nodes = get_nr_nodes(); + + if (nr_nodes <= 0) { + ksft_print_msg("failed to get nr_nodes\n"); + return KSFT_FAIL; + } + + old_refresh_intervals = calloc(nr_nodes, sizeof(long)); + new_refresh_intervals 
= calloc(nr_nodes, sizeof(long)); + old_page_age_intervals = calloc(nr_nodes, 1024); + + if (!(old_refresh_intervals && new_refresh_intervals && + old_page_age_intervals)) { + ksft_print_msg("failed to allocate memory for intervals\n"); + return KSFT_FAIL; + } + + err = get_refresh_interval_all_nodes(old_refresh_intervals, nr_nodes); + if (err < 0) { + ksft_print_msg("failed to read refresh interval\n"); + return KSFT_FAIL; + } + + err = get_page_age_intervals_all_nodes(old_page_age_intervals, nr_nodes); + if (err < 0) { + ksft_print_msg("failed to read page age interval\n"); + return KSFT_FAIL; + } + + for (i = 0; i < nr_nodes; ++i) + new_refresh_intervals[i] = REFRESH_INTERVAL; + + for (i = 0; i < nr_nodes; ++i) { + err = sysfs_set_page_age_intervals(i, PAGE_AGE_INTERVALS, + NR_PAGE_AGE_INTERVALS - 1); + if (err < 0) { + ksft_print_msg("failed to set page age interval\n"); + test_result = KSFT_FAIL; + goto fail; + } + } + + err = set_refresh_interval_all_nodes(new_refresh_intervals, nr_nodes); + if (err < 0) { + ksft_print_msg("failed to set refresh interval\n"); + test_result = KSFT_FAIL; + goto fail; + } + + sync(); + drop_pagecache(); + + test_result = f(); + +fail: + err = set_refresh_interval_all_nodes(old_refresh_intervals, nr_nodes); + if (err < 0) { + ksft_print_msg("failed to restore refresh interval\n"); + test_result = KSFT_FAIL; + } + err = set_page_age_intervals_all_nodes(old_page_age_intervals, nr_nodes); + if (err < 0) { + ksft_print_msg("failed to restore page age interval\n"); + test_result = KSFT_FAIL; + } + return test_result; +} + +static int test_file(void) +{ + ssize_t ws_size_ref, ws_size_test; + int ret = KSFT_FAIL, i; + pid_t pid = 0; + + ws_size_ref = read_interval_all_nodes(PAGETYPE_FILE, 0); + if (ws_size_ref < 0) + goto cleanup; + + pid = clone_and_run(alloc_file_workingset, (void *)TEST_SIZE); + if (pid < 0) + goto cleanup; + + read_interval_all_nodes(PAGETYPE_FILE, 0); + sleep_ms(REFRESH_INTERVAL); + + for (i = 0; i < 3; ++i) { + sleep_ms(REFRESH_INTERVAL); + ws_size_test = read_interval_all_nodes(PAGETYPE_FILE, 0); + ws_size_test += read_interval_all_nodes(PAGETYPE_FILE, 1); + if (ws_size_test < 0) + goto cleanup; + + if (!values_close(ws_size_test - ws_size_ref, TEST_SIZE, 10)) { + ksft_print_msg( + "file working set size difference too large: actual=%ld, expected=%ld\n", + ws_size_test - ws_size_ref, TEST_SIZE); + goto cleanup; + } + } + ret = KSFT_PASS; + +cleanup: + if (pid > 0) + kill(pid, SIGKILL); + cleanup_file_workingset(); + return ret; +} + +static int test_anon(void) +{ + ssize_t ws_size_ref, ws_size_test; + pid_t pid = 0; + int ret = KSFT_FAIL, i; + + ws_size_ref = read_interval_all_nodes(PAGETYPE_ANON, 0); + if (ws_size_ref < 0) + goto cleanup; + + pid = clone_and_run(alloc_anon_workingset, (void *)TEST_SIZE); + if (pid < 0) + goto cleanup; + + sleep_ms(REFRESH_INTERVAL); + read_interval_all_nodes(PAGETYPE_ANON, 0); + + for (i = 0; i < 5; ++i) { + sleep_ms(REFRESH_INTERVAL); + ws_size_test = read_interval_all_nodes(PAGETYPE_ANON, 0); + ws_size_test += read_interval_all_nodes(PAGETYPE_ANON, 1); + if (ws_size_test < 0) + goto cleanup; + + if (!values_close(ws_size_test - ws_size_ref, TEST_SIZE, 10)) { + ksft_print_msg( + "anon working set size difference too large: actual=%ld, expected=%ld\n", + ws_size_test - ws_size_ref, TEST_SIZE); + goto cleanup; + } + } + ret = KSFT_PASS; + +cleanup: + if (pid > 0) + kill(pid, SIGKILL); + return ret; +} + + +#define T(x) { x, #x } +struct workingset_test { + int (*fn)(void); + const char *name; +} tests[] 
= { + T(test_anon), + T(test_file), +}; +#undef T + +int main(int argc, char **argv) +{ + int ret = EXIT_SUCCESS, i, err; + + for (i = 0; i < ARRAY_SIZE(tests); i++) { + err = run_test(tests[i].fn); + switch (err) { + case KSFT_PASS: + ksft_test_result_pass("%s\n", tests[i].name); + break; + case KSFT_SKIP: + ksft_test_result_skip("%s\n", tests[i].name); + break; + default: + ret = EXIT_FAILURE; + ksft_test_result_fail("%s with error %d\n", + tests[i].name, err); + break; + } + } + return ret; +}
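Not part of the patch, but for reference: the test builds with the rest of the mm selftests and needs root, since it writes the per-node workingset_report sysfs files and /proc/sys/vm/drop_caches. Assuming an in-tree build, a typical invocation would be something like: make -C tools/testing/selftests/mm && sudo ./tools/testing/selftests/mm/workingset_report_test. The exact steps may differ depending on how the selftests are built and installed.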