From patchwork Thu Apr 1 18:32:33 2021
X-Patchwork-Submitter: Dave Hansen
X-Patchwork-Id: 12179315
Subject: [PATCH 09/10] mm/vmscan: never demote for memcg reclaim
From: Dave Hansen
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen, yang.shi@linux.alibaba.com,
 shy828301@gmail.com, weixugc@google.com, rientjes@google.com,
 ying.huang@intel.com, dan.j.williams@intel.com, david@redhat.com,
 osalvador@suse.de
Date: Thu, 01 Apr 2021 11:32:33 -0700
Message-Id: <20210401183233.64A91C97@viggo.jf.intel.com>
In-Reply-To: <20210401183216.443C4443@viggo.jf.intel.com>
References: <20210401183216.443C4443@viggo.jf.intel.com>
From: Dave Hansen

Global reclaim aims to reduce the amount of memory used on a given
node or set of nodes.  Migrating pages to another node serves this
purpose.

memcg reclaim is different.  Its goal is to reduce the total memory
consumption of the entire memcg, across all nodes.  Migration does
not assist memcg reclaim because it just moves page contents between
nodes rather than actually reducing memory consumption.

Signed-off-by: Dave Hansen
Suggested-by: Yang Shi
Reviewed-by: Yang Shi
Cc: Wei Xu
Cc: David Rientjes
Cc: Huang Ying
Cc: Dan Williams
Cc: David Hildenbrand
Cc: osalvador
---

 b/mm/vmscan.c |   18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff -puN mm/vmscan.c~never-demote-for-memcg-reclaim mm/vmscan.c
--- a/mm/vmscan.c~never-demote-for-memcg-reclaim	2021-03-31 15:17:20.476000239 -0700
+++ b/mm/vmscan.c	2021-03-31 15:17:20.487000239 -0700
@@ -288,7 +288,8 @@ static bool writeback_throttling_sane(st
 #endif
 
 static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
-					  int node_id)
+					  int node_id,
+					  struct scan_control *sc)
 {
 	if (memcg == NULL) {
 		/*
@@ -326,7 +327,7 @@ unsigned long zone_reclaimable_pages(str
 
 	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
 		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
-	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
+	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
 
@@ -1064,7 +1065,8 @@ static enum page_references page_check_r
 	return PAGEREF_RECLAIM;
 }
 
-static bool migrate_demote_page_ok(struct page *page)
+static bool migrate_demote_page_ok(struct page *page,
+				   struct scan_control *sc)
 {
 	int next_nid = next_demotion_node(page_to_nid(page));
 
@@ -1072,6 +1074,10 @@ static bool migrate_demote_page_ok(struc
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
+	/* It is pointless to do demotion in memcg reclaim */
+	if (cgroup_reclaim(sc))
+		return false;
+
 	if (next_nid == NUMA_NO_NODE)
 		return false;
 	if (PageTransHuge(page) && !thp_migration_supported())
@@ -1328,7 +1334,7 @@ retry:
 		 * Before reclaiming the page, try to relocate
 		 * its contents to another node.
 		 */
-		if (do_demote_pass && migrate_demote_page_ok(page)) {
+		if (do_demote_pass && migrate_demote_page_ok(page, sc)) {
 			list_add(&page->lru, &demote_pages);
 			unlock_page(page);
 			continue;
@@ -2362,7 +2368,7 @@ static void get_scan_count(struct lruvec
 	enum lru_list lru;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
+	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
@@ -2737,7 +2743,7 @@ static inline bool should_continue_recla
 	 */
 	pages_for_compaction = compact_gap(sc->order);
 	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
-	if (can_reclaim_anon_pages(NULL, pgdat->node_id))
+	if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
 		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
 
 	return inactive_lru_pages > pages_for_compaction;
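
For reference, the cgroup_reclaim() test added above distinguishes the
two reclaim modes by whether scan_control carries a target memcg: global
reclaim leaves it NULL, while limit-driven memcg reclaim points it at
the cgroup being shrunk.  A minimal sketch of that predicate, paraphrased
from the mm/vmscan.c of this era (it is not part of this patch, and
!CONFIG_MEMCG builds use a stub that simply returns false):

	/*
	 * sc->target_mem_cgroup is non-NULL only when reclaim was
	 * kicked off to shrink a specific memcg rather than to free
	 * memory on a node.  Demotion helps the latter but not the
	 * former: moving a page to another node leaves the memcg's
	 * total charge unchanged.
	 */
	static bool cgroup_reclaim(struct scan_control *sc)
	{
		return sc->target_mem_cgroup;
	}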