From patchwork Mon Apr  5 17:08:27 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tim Chen
X-Patchwork-Id: 12183455
From: Tim Chen
To: Michal Hocko
Cc: Tim Chen, Johannes Weiner, Andrew Morton, Dave Hansen, Ying Huang,
    Dan Williams, David Rientjes, Shakeel Butt, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 03/11] mm: Account the top tier memory usage per cgroup
Date: Mon, 5 Apr 2021 10:08:27 -0700
Message-Id: 
X-Mailer: git-send-email 2.20.1
In-Reply-To: 
References: 

For each memory cgroup, account its usage of top tier memory at the
time a top tier page is charged to or uncharged from the cgroup.
Signed-off-by: Tim Chen
---
 include/linux/memcontrol.h |  1 +
 mm/memcontrol.c            | 39 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 25d8b9acec7c..609d8590950c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -225,6 +225,7 @@ struct mem_cgroup {
 	/* Legacy consumer-oriented counters */
 	struct page_counter kmem;		/* v1 only */
 	struct page_counter tcpmem;		/* v1 only */
+	struct page_counter toptier;

 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9a9d677a6654..fe7bb8613f5a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -253,6 +253,13 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr)
 	return &container_of(vmpr, struct mem_cgroup, vmpressure)->css;
 }

+static inline bool top_tier(struct page *page)
+{
+	int nid = page_to_nid(page);
+
+	return node_state(nid, N_TOPTIER);
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 extern spinlock_t css_set_lock;

@@ -951,6 +958,23 @@ static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
 	__this_cpu_add(memcg->vmstats_percpu->nr_page_events, nr_pages);
 }

+static inline void mem_cgroup_charge_toptier(struct mem_cgroup *memcg,
+					     struct page *page,
+					     int nr_pages)
+{
+	if (!top_tier(page))
+		return;
+
+	if (nr_pages >= 0)
+		page_counter_charge(&memcg->toptier,
+				    (unsigned long) nr_pages);
+	else {
+		nr_pages = -nr_pages;
+		page_counter_uncharge(&memcg->toptier,
+				      (unsigned long) nr_pages);
+	}
+}
+
 static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
 				       enum mem_cgroup_events_target target)
 {
@@ -2932,6 +2956,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 	 * - exclusive reference
 	 */
 	page->memcg_data = (unsigned long)memcg;
+	mem_cgroup_charge_toptier(memcg, page, thp_nr_pages(page));
 }

 #ifdef CONFIG_MEMCG_KMEM
@@ -3138,6 +3163,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 	if (!ret) {
 		page->memcg_data = (unsigned long)memcg | MEMCG_DATA_KMEM;
+		mem_cgroup_charge_toptier(memcg, page, 1 << order);
 		return 0;
 	}
 	css_put(&memcg->css);
@@ -3161,6 +3187,7 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
 	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
 	__memcg_kmem_uncharge(memcg, nr_pages);
 	page->memcg_data = 0;
+	mem_cgroup_charge_toptier(memcg, page, -nr_pages);
 	css_put(&memcg->css);
 }
@@ -5389,11 +5416,13 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 		page_counter_init(&memcg->memory, &parent->memory);
 		page_counter_init(&memcg->swap, &parent->swap);
+		page_counter_init(&memcg->toptier, &parent->toptier);
 		page_counter_init(&memcg->kmem, &parent->kmem);
 		page_counter_init(&memcg->tcpmem, &parent->tcpmem);
 	} else {
 		page_counter_init(&memcg->memory, NULL);
 		page_counter_init(&memcg->swap, NULL);
+		page_counter_init(&memcg->toptier, NULL);
 		page_counter_init(&memcg->kmem, NULL);
 		page_counter_init(&memcg->tcpmem, NULL);
@@ -5745,6 +5774,8 @@ static int mem_cgroup_move_account(struct page *page,
 	css_put(&from->css);

 	page->memcg_data = (unsigned long)to;
+	mem_cgroup_charge_toptier(to, page, nr_pages);
+	mem_cgroup_charge_toptier(from, page, -nr_pages);

 	__unlock_page_memcg(from);
@@ -6832,6 +6863,7 @@ struct uncharge_gather {
 	unsigned long nr_pages;
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
+	unsigned long nr_toptier;
 	struct page *dummy_page;
 };
@@ -6846,6 +6878,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 	if (!mem_cgroup_is_root(ug->memcg)) {
 		page_counter_uncharge(&ug->memcg->memory, ug->nr_pages);
+		page_counter_uncharge(&ug->memcg->toptier, ug->nr_toptier);
 		if (do_memsw_account())
 			page_counter_uncharge(&ug->memcg->memsw, ug->nr_pages);
 		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
@@ -6891,6 +6924,8 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	nr_pages = compound_nr(page);
 	ug->nr_pages += nr_pages;
+	if (top_tier(page))
+		ug->nr_toptier += nr_pages;

 	if (PageMemcgKmem(page))
 		ug->nr_kmem += nr_pages;
@@ -7216,8 +7251,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	page->memcg_data = 0;

-	if (!mem_cgroup_is_root(memcg))
+	if (!mem_cgroup_is_root(memcg)) {
 		page_counter_uncharge(&memcg->memory, nr_entries);
+		mem_cgroup_charge_toptier(memcg, page, -nr_entries);
+	}

 	if (!cgroup_memory_noswap && memcg != swap_memcg) {
 		if (!mem_cgroup_is_root(swap_memcg))