From patchwork Tue Jul 30 23:13:03 2024
From: David Finkel
To: Muchun Song, Tejun Heo, Roman Gushchin, Andrew Morton
Cc: core-services@vimeo.com, Jonathan Corbet, Michal Hocko, Shakeel Butt,
 Shuah Khan, Johannes Weiner, Zefan Li, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, Michal Koutný, David Finkel, Waiman Long
Subject: [PATCH v7 1/2] mm, memcg: cg2 memory{.swap,}.peak write handlers
Date: Tue, 30 Jul 2024 19:13:03 -0400
Message-Id: <20240730231304.761942-2-davidf@vimeo.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240730231304.761942-1-davidf@vimeo.com>
References: <20240730231304.761942-1-davidf@vimeo.com>

Other mechanisms for querying the peak memory usage of either a process
or v1 memory cgroup allow for resetting the high watermark.
Restore parity with those mechanisms, but with a less racy API. For example:

 - Any write to memory.max_usage_in_bytes in a cgroup v1 mount resets
   the high watermark.
 - Writing "5" to the clear_refs pseudo-file in a process's /proc
   directory resets the peak RSS.

This change is an evolution of a previous patch, which mostly copied the
cgroup v1 behavior. However, there were concerns about races/ownership
issues with a global reset, so instead this change makes the reset
file-descriptor-local.

Writing any non-empty string to the memory.peak and memory.swap.peak
pseudo-files resets the high watermark to the current usage for
subsequent reads through that same FD.

Notably, following Johannes's suggestion, this implementation moves the
O(FDs that have written) behavior onto the FD write(2) path. Instead, on
the page-allocation path, we simply add one additional watermark to
conditionally bump per hierarchy level in the page-counter.

Additionally, this takes Longman's suggestion of nesting the
page-charging-path checks for the two watermarks to reduce the number of
common-case comparisons.

This behavior is particularly useful for work-scheduling systems that
need to track the memory usage of worker processes/cgroups per work-item.
Since memory can't be squeezed like CPU can (the OOM-killer has
opinions), these systems need to track the peak memory usage to compute
system/container fullness when bin-packing work-items.

Most notably, Vimeo's use-case involves a system that's doing global
bin-packing across many Kubernetes pods/containers, and while we can use
PSI for some local decisions about overload, we strive to avoid packing
workloads too tightly in the first place. To facilitate this, we track
peak memory usage. However, since we run with long-lived workers (to
amortize startup costs) we need a way to track the high watermark while
a work-item is executing.
Polling runs the risk of missing short spikes that last for timescales
below the polling interval, and peak memory tracking at the cgroup level
is otherwise perfect for this use-case. As this data is used to ensure
that binpacked work ends up with sufficient headroom, this use-case
mostly avoids the inaccuracies surrounding reclaimable memory.

Suggested-by: Johannes Weiner
Suggested-by: Waiman Long
Acked-by: Johannes Weiner
Reviewed-by: Michal Koutný
Signed-off-by: David Finkel
---
 Documentation/admin-guide/cgroup-v2.rst |  22 +++-
 include/linux/cgroup-defs.h             |   5 +
 include/linux/cgroup.h                  |   3 +
 include/linux/memcontrol.h              |   5 +
 include/linux/page_counter.h            |  11 ++-
 kernel/cgroup/cgroup-internal.h         |   2 +
 kernel/cgroup/cgroup.c                  |   7 ++
 mm/memcontrol.c                         | 116 ++++++++++++++++++++++--
 mm/page_counter.c                       |  30 ++++--
 9 files changed, 174 insertions(+), 27 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 86311c2907cd3..f0499884124d2 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1333,11 +1333,14 @@ The following nested keys are defined.
 	all the existing limitations and potential future extensions.
 
   memory.peak
-	A read-only single value file which exists on non-root
-	cgroups.
+	A read-write single value file which exists on non-root cgroups.
+
+	The max memory usage recorded for the cgroup and its descendants since
+	either the creation of the cgroup or the most recent reset for that FD.
 
-	The max memory usage recorded for the cgroup and its
-	descendants since the creation of the cgroup.
+	A write of any non-empty string to this file resets it to the
+	current memory usage for subsequent reads through the same
+	file descriptor.
 
   memory.oom.group
 	A read-write single value file which exists on non-root
@@ -1663,11 +1666,14 @@ The following nested keys are defined.
 	Healthy workloads are not expected to reach this limit.
 
   memory.swap.peak
-	A read-only single value file which exists on non-root
-	cgroups.
+	A read-write single value file which exists on non-root cgroups.
+
+	The max swap usage recorded for the cgroup and its descendants since
+	the creation of the cgroup or the most recent reset for that FD.
 
-	The max swap usage recorded for the cgroup and its
-	descendants since the creation of the cgroup.
+	A write of any non-empty string to this file resets it to the
+	current memory usage for subsequent reads through the same
+	file descriptor.
 
   memory.swap.max
 	A read-write single value file which exists on non-root
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index ae04035b6cbe5..7fc2d0195f560 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -775,6 +775,11 @@ struct cgroup_subsys {
 
 extern struct percpu_rw_semaphore cgroup_threadgroup_rwsem;
 
+struct cgroup_of_peak {
+	unsigned long		value;
+	struct list_head	list;
+};
+
 /**
  * cgroup_threadgroup_change_begin - threadgroup exclusion for cgroups
  * @tsk: target task
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index c60ba0ab14627..3e0563753cc3e 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -11,6 +11,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -854,4 +855,6 @@ static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
 
 struct cgroup *task_get_cgroup1(struct task_struct *tsk, int hierarchy_id);
 
+struct cgroup_of_peak *of_peak(struct kernfs_open_file *of);
+
 #endif /* _LINUX_CGROUP_H */
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0e5bf25d324f0..cc74d73d3b065 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -193,6 +193,11 @@ struct mem_cgroup {
 		struct page_counter memsw;	/* v1 only */
 	};
 
+	/* registered local peak watchers */
+	struct list_head memory_peaks;
+	struct list_head swap_peaks;
+	spinlock_t	 peaks_lock;
+
 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
index 904c52f97284f..898f562c0b838 100644
--- a/include/linux/page_counter.h
+++ b/include/linux/page_counter.h
@@ -26,6 +26,8 @@ struct page_counter {
 	atomic_long_t children_low_usage;
 
 	unsigned long watermark;
+	/* Latest cg2 reset watermark */
+	unsigned long local_watermark;
 	unsigned long failcnt;
 
 	/* Keep all the read most fields in a separete cacheline. */
@@ -78,7 +80,14 @@ int page_counter_memparse(const char *buf, const char *max,
 
 static inline void page_counter_reset_watermark(struct page_counter *counter)
 {
-	counter->watermark = page_counter_read(counter);
+	unsigned long usage = page_counter_read(counter);
+
+	/*
+	 * Update local_watermark first, so it's always <= watermark
+	 * (modulo CPU/compiler re-ordering)
+	 */
+	counter->local_watermark = usage;
+	counter->watermark = usage;
 }
 
 void page_counter_calculate_protection(struct page_counter *root,
diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
index 520b90dd97eca..c964dd7ff967a 100644
--- a/kernel/cgroup/cgroup-internal.h
+++ b/kernel/cgroup/cgroup-internal.h
@@ -81,6 +81,8 @@ struct cgroup_file_ctx {
 	struct {
 		struct cgroup_pidlist	*pidlist;
 	} procs1;
+
+	struct cgroup_of_peak peak;
 };
 
 /*
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index c8e4b62b436a4..0a97cb2ef1245 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -1972,6 +1972,13 @@ static int cgroup2_parse_param(struct fs_context *fc, struct fs_parameter *param
 	return -EINVAL;
 }
 
+struct cgroup_of_peak *of_peak(struct kernfs_open_file *of)
+{
+	struct cgroup_file_ctx *ctx = of->priv;
+
+	return &ctx->peak;
+}
+
 static void apply_cgroup_root_flags(unsigned int root_flags)
 {
 	if (current->nsproxy->cgroup_ns == &init_cgroup_ns) {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9603717886877..2663e2108cdbe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -25,6 +25,7 @@
  * Copyright (C) 2020 Alibaba, Inc, Alex Shi
  */
 
+#include
 #include
 #include
 #include
@@ -41,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -3558,6 +3560,9 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
 
 	INIT_WORK(&memcg->high_work, high_work_func);
 	vmpressure_init(&memcg->vmpressure);
+	INIT_LIST_HEAD(&memcg->memory_peaks);
+	INIT_LIST_HEAD(&memcg->swap_peaks);
+	spin_lock_init(&memcg->peaks_lock);
 	memcg->socket_pressure = jiffies;
 	memcg1_memcg_init(memcg);
 	memcg->kmemcg_id = -1;
@@ -3950,14 +3955,91 @@ static u64 memory_current_read(struct cgroup_subsys_state *css,
 	return (u64)page_counter_read(&memcg->memory) * PAGE_SIZE;
 }
 
-static u64 memory_peak_read(struct cgroup_subsys_state *css,
-			    struct cftype *cft)
+#define OFP_PEAK_UNSET (((-1UL)))
+
+static int peak_show(struct seq_file *sf, void *v, struct page_counter *pc)
 {
-	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct cgroup_of_peak *ofp = of_peak(sf->private);
+	u64 fd_peak = READ_ONCE(ofp->value), peak;
+
+	/* User wants global or local peak? */
+	if (fd_peak == OFP_PEAK_UNSET)
+		peak = pc->watermark;
+	else
+		peak = max(fd_peak, READ_ONCE(pc->local_watermark));
+
+	seq_printf(sf, "%llu\n", peak * PAGE_SIZE);
+	return 0;
+}
+
+static int memory_peak_show(struct seq_file *sf, void *v)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(sf));
+
+	return peak_show(sf, v, &memcg->memory);
+}
+
+static int peak_open(struct kernfs_open_file *of)
+{
+	struct cgroup_of_peak *ofp = of_peak(of);
+
+	ofp->value = OFP_PEAK_UNSET;
+	return 0;
+}
+
+static void peak_release(struct kernfs_open_file *of)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	struct cgroup_of_peak *ofp = of_peak(of);
+
+	if (ofp->value == OFP_PEAK_UNSET) {
+		/* fast path (no writes on this fd) */
+		return;
+	}
+	spin_lock(&memcg->peaks_lock);
+	list_del(&ofp->list);
+	spin_unlock(&memcg->peaks_lock);
+}
+
+static ssize_t peak_write(struct kernfs_open_file *of, char *buf, size_t nbytes,
+			  loff_t off, struct page_counter *pc,
+			  struct list_head *watchers)
+{
+	unsigned long usage;
+	struct cgroup_of_peak *peer_ctx;
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	struct cgroup_of_peak *ofp = of_peak(of);
+
+	spin_lock(&memcg->peaks_lock);
+
+	usage = page_counter_read(pc);
+	WRITE_ONCE(pc->local_watermark, usage);
+
+	list_for_each_entry(peer_ctx, watchers, list)
+		if (usage > peer_ctx->value)
+			WRITE_ONCE(peer_ctx->value, usage);
+
+	/* initial write, register watcher */
+	if (ofp->value == -1)
+		list_add(&ofp->list, watchers);
+
+	WRITE_ONCE(ofp->value, usage);
+	spin_unlock(&memcg->peaks_lock);
+
+	return nbytes;
+}
+
+static ssize_t memory_peak_write(struct kernfs_open_file *of, char *buf,
+				 size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
 
-	return (u64)memcg->memory.watermark * PAGE_SIZE;
+	return peak_write(of, buf, nbytes, off, &memcg->memory,
+			  &memcg->memory_peaks);
 }
 
+#undef OFP_PEAK_UNSET
+
 static int memory_min_show(struct seq_file *m, void *v)
 {
 	return seq_puts_memcg_tunable(m,
@@ -4307,7 +4389,10 @@ static struct cftype memory_files[] = {
 	{
 		.name = "peak",
 		.flags = CFTYPE_NOT_ON_ROOT,
-		.read_u64 = memory_peak_read,
+		.open = peak_open,
+		.release = peak_release,
+		.seq_show = memory_peak_show,
+		.write = memory_peak_write,
 	},
 	{
 		.name = "min",
@@ -5099,12 +5184,20 @@ static u64 swap_current_read(struct cgroup_subsys_state *css,
 	return (u64)page_counter_read(&memcg->swap) * PAGE_SIZE;
 }
 
-static u64 swap_peak_read(struct cgroup_subsys_state *css,
-			  struct cftype *cft)
+static int swap_peak_show(struct seq_file *sf, void *v)
 {
-	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(sf));
+
+	return peak_show(sf, v, &memcg->swap);
+}
+
+static ssize_t swap_peak_write(struct kernfs_open_file *of, char *buf,
+			       size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
 
-	return (u64)memcg->swap.watermark * PAGE_SIZE;
+	return peak_write(of, buf, nbytes, off, &memcg->swap,
+			  &memcg->swap_peaks);
 }
 
 static int swap_high_show(struct seq_file *m, void *v)
@@ -5188,7 +5281,10 @@ static struct cftype swap_files[] = {
 	{
 		.name = "swap.peak",
 		.flags = CFTYPE_NOT_ON_ROOT,
-		.read_u64 = swap_peak_read,
+		.open = peak_open,
+		.release = peak_release,
+		.seq_show = swap_peak_show,
+		.write = swap_peak_write,
 	},
 	{
 		.name = "swap.events",
diff --git a/mm/page_counter.c b/mm/page_counter.c
index 0153f5bb31611..ad9bdde5d5d20 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -79,9 +79,22 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
 		/*
 		 * This is indeed racy, but we can live with some
 		 * inaccuracy in the watermark.
+		 *
+		 * Notably, we have two watermarks to allow for both a globally
+		 * visible peak and one that can be reset at a smaller scope.
+		 *
+		 * Since we reset both watermarks when the global reset occurs,
+		 * we can guarantee that watermark >= local_watermark, so we
+		 * don't need to do both comparisons every time.
+		 *
+		 * On systems with branch predictors, the inner condition should
+		 * be almost free.
 		 */
-		if (new > READ_ONCE(c->watermark))
-			WRITE_ONCE(c->watermark, new);
+		if (new > READ_ONCE(c->local_watermark)) {
+			WRITE_ONCE(c->local_watermark, new);
+			if (new > READ_ONCE(c->watermark))
+				WRITE_ONCE(c->watermark, new);
+		}
 	}
 }
 
@@ -129,12 +142,13 @@ bool page_counter_try_charge(struct page_counter *counter,
 			goto failed;
 		}
 		propagate_protected_usage(c, new);
-		/*
-		 * Just like with failcnt, we can live with some
-		 * inaccuracy in the watermark.
-		 */
-		if (new > READ_ONCE(c->watermark))
-			WRITE_ONCE(c->watermark, new);
+
+		/* see comment on page_counter_charge */
+		if (new > READ_ONCE(c->local_watermark)) {
+			WRITE_ONCE(c->local_watermark, new);
+			if (new > READ_ONCE(c->watermark))
+				WRITE_ONCE(c->watermark, new);
+		}
 	}
 
 	return true;
From: David Finkel
To: Muchun Song, Tejun Heo, Roman Gushchin, Andrew Morton
Cc: core-services@vimeo.com, Jonathan Corbet, Michal Hocko, Shakeel Butt,
 Shuah Khan, Johannes Weiner, Zefan Li, cgroups@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, Michal Koutný, David Finkel
Subject: [PATCH v7 2/2] mm, memcg: cg2 memory{.swap,}.peak write tests
Date: Tue, 30 Jul 2024 19:13:04 -0400
Message-Id: <20240730231304.761942-3-davidf@vimeo.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20240730231304.761942-1-davidf@vimeo.com>
References: <20240730231304.761942-1-davidf@vimeo.com>

Extend two existing tests to cover extracting memory usage through the
newly mutable memory.peak and memory.swap.peak handlers.
In particular, make sure to exercise adding and removing watchers with
overlapping lifetimes so the less-trivial logic gets tested.

The new/updated tests attempt to detect a lack of the write handler by
fstat'ing the memory.peak and memory.swap.peak files and skip the tests
if that's the case. Additionally, skip if the file doesn't exist at all.

Signed-off-by: David Finkel
---
 tools/testing/selftests/cgroup/cgroup_util.c  |  22 ++
 tools/testing/selftests/cgroup/cgroup_util.h  |   2 +
 .../selftests/cgroup/test_memcontrol.c        | 264 +++++++++++++++++-
 3 files changed, 280 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index 432db923bced0..1e2d46636a0ca 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -141,6 +141,16 @@ long cg_read_long(const char *cgroup, const char *control)
 	return atol(buf);
 }
 
+long cg_read_long_fd(int fd)
+{
+	char buf[128];
+
+	if (pread(fd, buf, sizeof(buf), 0) <= 0)
+		return -1;
+
+	return atol(buf);
+}
+
 long cg_read_key_long(const char *cgroup, const char *control, const char *key)
 {
 	char buf[PAGE_SIZE];
@@ -183,6 +193,18 @@ int cg_write(const char *cgroup, const char *control, char *buf)
 	return ret == len ? 0 : ret;
 }
 
+/*
+ * Returns fd on success, or -1 on failure.
+ * (fd should be closed with close() as usual)
+ */
+int cg_open(const char *cgroup, const char *control, int flags)
+{
+	char path[PATH_MAX];
+
+	snprintf(path, sizeof(path), "%s/%s", cgroup, control);
+	return open(path, flags);
+}
+
 int cg_write_numeric(const char *cgroup, const char *control, long value)
 {
 	char buf[64];
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index e8d04ac9e3d23..19b131ee77072 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -34,9 +34,11 @@ extern int cg_read_strcmp(const char *cgroup, const char *control,
 extern int cg_read_strstr(const char *cgroup, const char *control,
 			  const char *needle);
 extern long cg_read_long(const char *cgroup, const char *control);
+extern long cg_read_long_fd(int fd);
 long cg_read_key_long(const char *cgroup, const char *control, const char *key);
 extern long cg_read_lc(const char *cgroup, const char *control);
 extern int cg_write(const char *cgroup, const char *control, char *buf);
+extern int cg_open(const char *cgroup, const char *control, int flags);
 int cg_write_numeric(const char *cgroup, const char *control, long value);
 extern int cg_run(const char *cgroup, int (*fn)(const char *cgroup, void *arg),
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 41ae8047b8895..16f5d74ae762e 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -161,13 +161,16 @@ static int alloc_pagecache_50M_check(const char *cgroup, void *arg)
 /*
  * This test create a memory cgroup, allocates
  * some anonymous memory and some pagecache
- * and check memory.current and some memory.stat values.
+ * and checks memory.current, memory.peak, and some memory.stat values.
  */
-static int test_memcg_current(const char *root)
+static int test_memcg_current_peak(const char *root)
 {
 	int ret = KSFT_FAIL;
-	long current;
+	long current, peak, peak_reset;
 	char *memcg;
+	bool fd2_closed = false, fd3_closed = false, fd4_closed = false;
+	int peak_fd = -1, peak_fd2 = -1, peak_fd3 = -1, peak_fd4 = -1;
+	struct stat ss;
 
 	memcg = cg_name(root, "memcg_test");
 	if (!memcg)
@@ -180,15 +183,124 @@ static int test_memcg_current(const char *root)
 	if (current != 0)
 		goto cleanup;
 
+	peak = cg_read_long(memcg, "memory.peak");
+	if (peak != 0)
+		goto cleanup;
+
 	if (cg_run(memcg, alloc_anon_50M_check, NULL))
 		goto cleanup;
 
+	peak = cg_read_long(memcg, "memory.peak");
+	if (peak < MB(50))
+		goto cleanup;
+
+	/*
+	 * We'll open a few FDs for the same memory.peak file to exercise the free-path
+	 * We need at least three to be closed in a different order than writes occurred to test
+	 * the linked-list handling.
+	 */
+	peak_fd = cg_open(memcg, "memory.peak", O_RDWR | O_APPEND | O_CLOEXEC);
+
+	if (peak_fd == -1) {
+		if (errno == ENOENT)
+			ret = KSFT_SKIP;
+		goto cleanup;
+	}
+
+	/*
+	 * Before we try to use memory.peak's fd, try to figure out whether
+	 * this kernel supports writing to that file in the first place. (by
+	 * checking the writable bit on the file's st_mode)
+	 */
+	if (fstat(peak_fd, &ss))
+		goto cleanup;
+
+	if ((ss.st_mode & S_IWUSR) == 0) {
+		ret = KSFT_SKIP;
+		goto cleanup;
+	}
+
+	peak_fd2 = cg_open(memcg, "memory.peak", O_RDWR | O_APPEND | O_CLOEXEC);
+
+	if (peak_fd2 == -1)
+		goto cleanup;
+
+	peak_fd3 = cg_open(memcg, "memory.peak", O_RDWR | O_APPEND | O_CLOEXEC);
+
+	if (peak_fd3 == -1)
+		goto cleanup;
+
+	/* any non-empty string resets, but make it clear */
+	static const char reset_string[] = "reset\n";
+
+	peak_reset = write(peak_fd, reset_string, sizeof(reset_string));
+	if (peak_reset != sizeof(reset_string))
+		goto cleanup;
+
+	peak_reset = write(peak_fd2, reset_string, sizeof(reset_string));
+	if (peak_reset != sizeof(reset_string))
+		goto cleanup;
+
+	peak_reset = write(peak_fd3, reset_string, sizeof(reset_string));
+	if (peak_reset != sizeof(reset_string))
+		goto cleanup;
+
+	/* Make sure a completely independent read isn't affected by our FD-local reset above*/
+	peak = cg_read_long(memcg, "memory.peak");
+	if (peak < MB(50))
+		goto cleanup;
+
+	fd2_closed = true;
+	if (close(peak_fd2))
+		goto cleanup;
+
+	peak_fd4 = cg_open(memcg, "memory.peak", O_RDWR | O_APPEND | O_CLOEXEC);
+
+	if (peak_fd4 == -1)
+		goto cleanup;
+
+	peak_reset = write(peak_fd4, reset_string, sizeof(reset_string));
+	if (peak_reset != sizeof(reset_string))
+		goto cleanup;
+
+	peak = cg_read_long_fd(peak_fd);
+	if (peak > MB(30) || peak < 0)
+		goto cleanup;
+
 	if (cg_run(memcg, alloc_pagecache_50M_check, NULL))
 		goto cleanup;
 
+	peak = cg_read_long(memcg, "memory.peak");
+	if (peak < MB(50))
+		goto cleanup;
+
+	/* Make sure everything is back to normal */
+	peak = cg_read_long_fd(peak_fd);
+	if (peak < MB(50))
+		goto cleanup;
+
+	peak = cg_read_long_fd(peak_fd4);
+	if (peak < MB(50))
+		goto cleanup;
+
+	fd3_closed = true;
+	if (close(peak_fd3))
+		goto cleanup;
+
+	fd4_closed = true;
+	if (close(peak_fd4))
+		goto cleanup;
+
 	ret = KSFT_PASS;
 
 cleanup:
+	close(peak_fd);
+	if (!fd2_closed)
+		close(peak_fd2);
+	if (!fd3_closed)
+		close(peak_fd3);
+	if (!fd4_closed)
+		close(peak_fd4);
 	cg_destroy(memcg);
 	free(memcg);
 
@@ -817,13 +929,19 @@ static int alloc_anon_50M_check_swap(const char *cgroup, void *arg)
 /*
  * This test checks that memory.swap.max limits the amount of
- * anonymous memory which can be swapped out.
+ * anonymous memory which can be swapped out. Additionally, it verifies that
+ * memory.swap.peak reflects the high watermark and can be reset.
  */
-static int test_memcg_swap_max(const char *root)
+static int test_memcg_swap_max_peak(const char *root)
 {
 	int ret = KSFT_FAIL;
 	char *memcg;
-	long max;
+	long max, peak;
+	struct stat ss;
+	int swap_peak_fd = -1, mem_peak_fd = -1;
+
+	/* any non-empty string resets */
+	static const char reset_string[] = "foobarbaz";
 
 	if (!is_swap_enabled())
 		return KSFT_SKIP;
@@ -840,6 +958,61 @@
 		goto cleanup;
 	}
 
+	swap_peak_fd = cg_open(memcg, "memory.swap.peak",
+			       O_RDWR | O_APPEND | O_CLOEXEC);
+
+	if (swap_peak_fd == -1) {
+		if (errno == ENOENT)
+			ret = KSFT_SKIP;
+		goto cleanup;
+	}
+
+	/*
+	 * Before we try to use memory.swap.peak's fd, try to figure out
+	 * whether this kernel supports writing to that file in the first
+	 * place.
(by checking the writable bit on the file's st_mode) + */ + if (fstat(swap_peak_fd, &ss)) + goto cleanup; + + if ((ss.st_mode & S_IWUSR) == 0) { + ret = KSFT_SKIP; + goto cleanup; + } + + mem_peak_fd = cg_open(memcg, "memory.peak", O_RDWR | O_APPEND | O_CLOEXEC); + + if (mem_peak_fd == -1) + goto cleanup; + + if (cg_read_long(memcg, "memory.swap.peak")) + goto cleanup; + + if (cg_read_long_fd(swap_peak_fd)) + goto cleanup; + + /* switch the swap and mem fds into local-peak tracking mode*/ + int peak_reset = write(swap_peak_fd, reset_string, sizeof(reset_string)); + + if (peak_reset != sizeof(reset_string)) + goto cleanup; + + if (cg_read_long_fd(swap_peak_fd)) + goto cleanup; + + if (cg_read_long(memcg, "memory.peak")) + goto cleanup; + + if (cg_read_long_fd(mem_peak_fd)) + goto cleanup; + + peak_reset = write(mem_peak_fd, reset_string, sizeof(reset_string)); + if (peak_reset != sizeof(reset_string)) + goto cleanup; + + if (cg_read_long_fd(mem_peak_fd)) + goto cleanup; + if (cg_read_strcmp(memcg, "memory.max", "max\n")) goto cleanup; @@ -862,6 +1035,61 @@ static int test_memcg_swap_max(const char *root) if (cg_read_key_long(memcg, "memory.events", "oom_kill ") != 1) goto cleanup; + peak = cg_read_long(memcg, "memory.peak"); + if (peak < MB(29)) + goto cleanup; + + peak = cg_read_long(memcg, "memory.swap.peak"); + if (peak < MB(29)) + goto cleanup; + + peak = cg_read_long_fd(mem_peak_fd); + if (peak < MB(29)) + goto cleanup; + + peak = cg_read_long_fd(swap_peak_fd); + if (peak < MB(29)) + goto cleanup; + + /* + * open, reset and close the peak swap on another FD to make sure + * multiple extant fds don't corrupt the linked-list + */ + peak_reset = cg_write(memcg, "memory.swap.peak", (char *)reset_string); + if (peak_reset) + goto cleanup; + + peak_reset = cg_write(memcg, "memory.peak", (char *)reset_string); + if (peak_reset) + goto cleanup; + + /* actually reset on the fds */ + peak_reset = write(swap_peak_fd, reset_string, sizeof(reset_string)); + if (peak_reset 
!= sizeof(reset_string)) + goto cleanup; + + peak_reset = write(mem_peak_fd, reset_string, sizeof(reset_string)); + if (peak_reset != sizeof(reset_string)) + goto cleanup; + + peak = cg_read_long_fd(swap_peak_fd); + if (peak > MB(10)) + goto cleanup; + + /* + * The cgroup is now empty, but there may be a page or two associated + * with the open FD accounted to it. + */ + peak = cg_read_long_fd(mem_peak_fd); + if (peak > MB(1)) + goto cleanup; + + if (cg_read_long(memcg, "memory.peak") < MB(29)) + goto cleanup; + + if (cg_read_long(memcg, "memory.swap.peak") < MB(29)) + goto cleanup; + if (cg_run(memcg, alloc_anon_50M_check_swap, (void *)MB(30))) goto cleanup; @@ -869,9 +1097,29 @@ static int test_memcg_swap_max(const char *root) if (max <= 0) goto cleanup; + peak = cg_read_long(memcg, "memory.peak"); + if (peak < MB(29)) + goto cleanup; + + peak = cg_read_long(memcg, "memory.swap.peak"); + if (peak < MB(29)) + goto cleanup; + + peak = cg_read_long_fd(mem_peak_fd); + if (peak < MB(29)) + goto cleanup; + + peak = cg_read_long_fd(swap_peak_fd); + if (peak < MB(19)) + goto cleanup; + ret = KSFT_PASS; cleanup: + if (mem_peak_fd != -1 && close(mem_peak_fd)) + ret = KSFT_FAIL; + if (swap_peak_fd != -1 && close(swap_peak_fd)) + ret = KSFT_FAIL; cg_destroy(memcg); free(memcg); @@ -1295,7 +1543,7 @@ struct memcg_test { const char *name; } tests[] = { T(test_memcg_subtree_control), - T(test_memcg_current), + T(test_memcg_current_peak), T(test_memcg_min), T(test_memcg_low), T(test_memcg_high), @@ -1303,7 +1551,7 @@ struct memcg_test { T(test_memcg_max), T(test_memcg_reclaim), T(test_memcg_oom_events), - T(test_memcg_swap_max), + T(test_memcg_swap_max_peak), T(test_memcg_sock), T(test_memcg_oom_group_leaf_events), T(test_memcg_oom_group_parent_events),