From patchwork Fri Sep 20 22:11:49 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kaiyang Zhao <kaiyang2@andrew.cmu.edu>
X-Patchwork-Id: 13808712
From: kaiyang2@cs.cmu.edu
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev,
    akpm@linux-foundation.org, mhocko@kernel.org, nehagholkar@meta.com,
    abhishekd@meta.com, hannes@cmpxchg.org, weixugc@google.com,
    rientjes@google.com, Kaiyang Zhao <kaiyang2@cs.cmu.edu>
Subject: [RFC PATCH 2/4] calculate memory.low for the local node and track its usage
Date: Fri, 20 Sep 2024 22:11:49 +0000
Message-ID: <20240920221202.1734227-3-kaiyang2@cs.cmu.edu>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240920221202.1734227-1-kaiyang2@cs.cmu.edu>
References: <20240920221202.1734227-1-kaiyang2@cs.cmu.edu>
MIME-Version: 1.0

From: Kaiyang Zhao <kaiyang2@cs.cmu.edu>

Add a memory.low for the top-tier node (locallow) and track its usage.
locallow is set by scaling low by the ratio of node 0 capacity to
node 0 + node 1 capacity.
Signed-off-by: Kaiyang Zhao <kaiyang2@cs.cmu.edu>
---
 include/linux/page_counter.h | 16 ++++++++---
 mm/hugetlb_cgroup.c          |  4 +--
 mm/memcontrol.c              | 42 ++++++++++++++++++++++-------
 mm/page_counter.c            | 52 ++++++++++++++++++++++++++++--------
 4 files changed, 88 insertions(+), 26 deletions(-)

diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
index 79dbd8bc35a7..aa56c93415ef 100644
--- a/include/linux/page_counter.h
+++ b/include/linux/page_counter.h
@@ -13,6 +13,7 @@ struct page_counter {
	 * memcg->memory.usage is a hot member of struct mem_cgroup.
	 */
	atomic_long_t usage;
+	struct mem_cgroup *memcg; /* memcg that owns this counter */

	CACHELINE_PADDING(_pad1_);

	/* effective memory.min and memory.min usage tracking */
@@ -25,6 +26,10 @@ struct page_counter {
	atomic_long_t low_usage;
	atomic_long_t children_low_usage;

+	unsigned long elocallow;
+	atomic_long_t locallow_usage;
+	atomic_long_t children_locallow_usage;
+
	unsigned long watermark;
	/* Latest cg2 reset watermark */
	unsigned long local_watermark;
@@ -36,6 +41,7 @@ struct page_counter {
	bool protection_support;
	unsigned long min;
	unsigned long low;
+	unsigned long locallow;
	unsigned long high;
	unsigned long max;
	struct page_counter *parent;
@@ -52,12 +58,13 @@ struct page_counter {
 */
 static inline void page_counter_init(struct page_counter *counter,
				     struct page_counter *parent,
-				     bool protection_support)
+				     bool protection_support, struct mem_cgroup *memcg)
 {
	counter->usage = (atomic_long_t)ATOMIC_LONG_INIT(0);
	counter->max = PAGE_COUNTER_MAX;
	counter->parent = parent;
	counter->protection_support = protection_support;
+	counter->memcg = memcg;
 }

 static inline unsigned long page_counter_read(struct page_counter *counter)
@@ -72,7 +79,8 @@ bool page_counter_try_charge(struct page_counter *counter,
			     struct page_counter **fail);
 void page_counter_uncharge(struct page_counter *counter, unsigned long nr_pages);
 void page_counter_set_min(struct page_counter *counter, unsigned long nr_pages);
-void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages);
+void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages,
+			unsigned long nr_pages_local);

 static inline void page_counter_set_high(struct page_counter *counter,
					 unsigned long nr_pages)
@@ -99,11 +107,11 @@ static inline void page_counter_reset_watermark(struct page_counter *counter)
 #ifdef CONFIG_MEMCG
 void page_counter_calculate_protection(struct page_counter *root,
				       struct page_counter *counter,
-				       bool recursive_protection);
+				       bool recursive_protection, int is_local);
 #else
 static inline void page_counter_calculate_protection(struct page_counter *root,
						     struct page_counter *counter,
-						     bool recursive_protection) {}
+						     bool recursive_protection, int is_local) {}
 #endif

 #endif /* _LINUX_PAGE_COUNTER_H */
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index d8d0e665caed..0e07a7a1d5b8 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -114,10 +114,10 @@ static void hugetlb_cgroup_init(struct hugetlb_cgroup *h_cgroup,
	}

	page_counter_init(hugetlb_cgroup_counter_from_cgroup(h_cgroup, idx),
-			  fault_parent, false);
+			  fault_parent, false, NULL);
	page_counter_init(
		hugetlb_cgroup_counter_from_cgroup_rsvd(h_cgroup, idx),
-		rsvd_parent, false);
+		rsvd_parent, false, NULL);

	limit = round_down(PAGE_COUNTER_MAX,
			   pages_per_huge_page(&hstates[idx]));
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 20b715441332..d7c5fff12105 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1497,6 +1497,9 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
			       vm_event_name(memcg_vm_event_stat[i]),
			       memcg_events(memcg, memcg_vm_event_stat[i]));
	}
+
+	seq_buf_printf(s, "local_usage %lu\n",
+		       get_cgroup_local_usage(memcg, true));
 }

 static void memory_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
@@ -3597,8 +3600,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
	if (parent) {
		WRITE_ONCE(memcg->swappiness, mem_cgroup_swappiness(parent));

-		page_counter_init(&memcg->memory, &parent->memory, true);
-		page_counter_init(&memcg->swap, &parent->swap, false);
+		page_counter_init(&memcg->memory, &parent->memory, true, memcg);
+		page_counter_init(&memcg->swap, &parent->swap, false, NULL);
 #ifdef CONFIG_MEMCG_V1
		WRITE_ONCE(memcg->oom_kill_disable, READ_ONCE(parent->oom_kill_disable));
		page_counter_init(&memcg->kmem, &parent->kmem, false);
@@ -3607,8 +3610,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
	} else {
		init_memcg_stats();
		init_memcg_events();
-		page_counter_init(&memcg->memory, NULL, true);
-		page_counter_init(&memcg->swap, NULL, false);
+		page_counter_init(&memcg->memory, NULL, true, memcg);
+		page_counter_init(&memcg->swap, NULL, false, NULL);
 #ifdef CONFIG_MEMCG_V1
		page_counter_init(&memcg->kmem, NULL, false);
		page_counter_init(&memcg->tcpmem, NULL, false);
@@ -3677,7 +3680,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
	memcg1_css_offline(memcg);

	page_counter_set_min(&memcg->memory, 0);
-	page_counter_set_low(&memcg->memory, 0);
+	page_counter_set_low(&memcg->memory, 0, 0);

	zswap_memcg_offline_cleanup(memcg);

@@ -3748,7 +3751,7 @@ static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
	page_counter_set_max(&memcg->tcpmem, PAGE_COUNTER_MAX);
 #endif
	page_counter_set_min(&memcg->memory, 0);
-	page_counter_set_low(&memcg->memory, 0);
+	page_counter_set_low(&memcg->memory, 0, 0);
	page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);
	memcg1_soft_limit_reset(memcg);
	page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
@@ -4051,6 +4054,12 @@ static ssize_t memory_min_write(struct kernfs_open_file *of,
	return nbytes;
 }

+static int memory_locallow_show(struct seq_file *m, void *v)
+{
+	return seq_puts_memcg_tunable(m,
+		READ_ONCE(mem_cgroup_from_seq(m)->memory.locallow));
+}
+
 static int memory_low_show(struct seq_file *m, void *v)
 {
	return seq_puts_memcg_tunable(m,
@@ -4061,7 +4070,8 @@ static ssize_t memory_low_write(struct kernfs_open_file *of,
				  char *buf, size_t nbytes, loff_t off)
 {
	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
-	unsigned long low;
+	struct sysinfo si;
+	unsigned long low, locallow, local_capacity, total_capacity;
	int err;

	buf = strstrip(buf);
@@ -4069,7 +4079,15 @@ static ssize_t memory_low_write(struct kernfs_open_file *of,
	if (err)
		return err;

-	page_counter_set_low(&memcg->memory, low);
+	/* Hardcoded 0 for local node and 1 for remote. */
+	si_meminfo_node(&si, 0);
+	local_capacity = si.totalram; /* In pages. */
+	total_capacity = local_capacity;
+	si_meminfo_node(&si, 1);
+	total_capacity += si.totalram;
+	locallow = low * local_capacity / total_capacity;
+
+	page_counter_set_low(&memcg->memory, low, locallow);

	return nbytes;
 }
@@ -4394,6 +4412,11 @@ static struct cftype memory_files[] = {
		.seq_show = memory_low_show,
		.write = memory_low_write,
	},
+	{
+		.name = "locallow",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.seq_show = memory_locallow_show,
+	},
	{
		.name = "high",
		.flags = CFTYPE_NOT_ON_ROOT,
@@ -4483,7 +4506,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
	if (!root)
		root = root_mem_cgroup;

-	page_counter_calculate_protection(&root->memory, &memcg->memory, recursive_protection);
+	page_counter_calculate_protection(&root->memory, &memcg->memory,
+					  recursive_protection, false);
 }

 static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
diff --git a/mm/page_counter.c b/mm/page_counter.c
index b249d15af9dd..97205aafab46 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -18,8 +18,10 @@ static bool track_protection(struct page_counter *c)
	return c->protection_support;
 }

+extern unsigned long get_cgroup_local_usage(struct mem_cgroup *memcg, bool flush);
+
 static void propagate_protected_usage(struct page_counter *c,
-				      unsigned long usage)
+				      unsigned long usage, unsigned long local_usage)
 {
	unsigned long protected, old_protected;
	long delta;
@@ -44,6 +46,15 @@ static void propagate_protected_usage(struct page_counter *c,
		if (delta)
			atomic_long_add(delta, &c->parent->children_low_usage);
	}
+
+	protected = min(local_usage, READ_ONCE(c->locallow));
+	old_protected = atomic_long_read(&c->locallow_usage);
+	if (protected != old_protected) {
+		old_protected = atomic_long_xchg(&c->locallow_usage, protected);
+		delta = protected - old_protected;
+		if (delta)
+			atomic_long_add(delta, &c->parent->children_locallow_usage);
+	}
 }

 /**
@@ -63,7 +74,8 @@ void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages)
		atomic_long_set(&counter->usage, new);
	}
	if (track_protection(counter))
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(counter, new,
+			get_cgroup_local_usage(counter->memcg, false));
 }

 /**
@@ -83,7 +95,8 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
		new = atomic_long_add_return(nr_pages, &c->usage);
		if (protection)
-			propagate_protected_usage(c, new);
+			propagate_protected_usage(c, new,
+				get_cgroup_local_usage(counter->memcg, false));
		/*
		 * This is indeed racy, but we can live with some
		 * inaccuracy in the watermark.
@@ -151,7 +164,8 @@ bool page_counter_try_charge(struct page_counter *counter,
			goto failed;
		}
		if (protection)
-			propagate_protected_usage(c, new);
+			propagate_protected_usage(c, new,
+				get_cgroup_local_usage(counter->memcg, false));

		/* see comment on page_counter_charge */
		if (new > READ_ONCE(c->local_watermark)) {
@@ -238,7 +252,8 @@ void page_counter_set_min(struct page_counter *counter, unsigned long nr_pages)
	WRITE_ONCE(counter->min, nr_pages);

	for (c = counter; c; c = c->parent)
-		propagate_protected_usage(c, atomic_long_read(&c->usage));
+		propagate_protected_usage(c, atomic_long_read(&c->usage),
+			get_cgroup_local_usage(counter->memcg, false));
 }

 /**
@@ -248,14 +263,17 @@ void page_counter_set_min(struct page_counter *counter, unsigned long nr_pages)
 *
 * The caller must serialize invocations on the same counter.
 */
-void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages)
+void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages,
+		unsigned long nr_pages_local)
 {
	struct page_counter *c;

	WRITE_ONCE(counter->low, nr_pages);
+	WRITE_ONCE(counter->locallow, nr_pages_local);

	for (c = counter; c; c = c->parent)
-		propagate_protected_usage(c, atomic_long_read(&c->usage));
+		propagate_protected_usage(c, atomic_long_read(&c->usage),
+			get_cgroup_local_usage(counter->memcg, false));
 }

 /**
@@ -421,9 +439,9 @@ static unsigned long effective_protection(unsigned long usage,
 */
 void page_counter_calculate_protection(struct page_counter *root,
				       struct page_counter *counter,
-				       bool recursive_protection)
+				       bool recursive_protection, int is_local)
 {
-	unsigned long usage, parent_usage;
+	unsigned long usage, parent_usage, local_usage, parent_local_usage;
	struct page_counter *parent = counter->parent;

	/*
@@ -437,16 +455,19 @@ void page_counter_calculate_protection(struct page_counter *root,
		return;

	usage = page_counter_read(counter);
-	if (!usage)
+	local_usage = get_cgroup_local_usage(counter->memcg, true);
+	if (!usage || !local_usage)
		return;

	if (parent == root) {
		counter->emin = READ_ONCE(counter->min);
		counter->elow = READ_ONCE(counter->low);
+		counter->elocallow = READ_ONCE(counter->locallow);
		return;
	}

	parent_usage = page_counter_read(parent);
+	parent_local_usage = get_cgroup_local_usage(parent->memcg, true);

	WRITE_ONCE(counter->emin, effective_protection(usage, parent_usage,
			READ_ONCE(counter->min),
@@ -454,7 +475,16 @@ void page_counter_calculate_protection(struct page_counter *root,
			atomic_long_read(&parent->children_min_usage),
			recursive_protection));

-	WRITE_ONCE(counter->elow, effective_protection(usage, parent_usage,
+	if (is_local)
+		WRITE_ONCE(counter->elocallow,
+			effective_protection(local_usage, parent_local_usage,
+				READ_ONCE(counter->locallow),
+				READ_ONCE(parent->elocallow),
+				atomic_long_read(&parent->children_locallow_usage),
+				recursive_protection));
+	else
+		WRITE_ONCE(counter->elow,
+			effective_protection(usage, parent_usage,
				READ_ONCE(counter->low),
				READ_ONCE(parent->elow),
				atomic_long_read(&parent->children_low_usage),