From patchwork Thu Nov 11 04:15:03 2021
X-Patchwork-Submitter: Yu Zhao <yuzhao@google.com>
X-Patchwork-Id: 12614055
Date: Wed, 10 Nov 2021 21:15:03 -0700
In-Reply-To: <20211111041510.402534-1-yuzhao@google.com>
Message-Id: <20211111041510.402534-4-yuzhao@google.com>
References: <20211111041510.402534-1-yuzhao@google.com>
X-Mailer: git-send-email 2.34.0.rc0.344.g81b53c2807-goog
Subject: [PATCH v5 03/10] mm/vmscan.c: refactor shrink_node()
From: Yu Zhao <yuzhao@google.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, page-reclaim@google.com, holger@applied-asynchrony.com,
    iam@valdikss.org.ru, Yu Zhao <yuzhao@google.com>, Konstantin Kharlamov

This patch refactors shrink_node() to improve readability for the
upcoming changes to mm/vmscan.c.

Signed-off-by: Yu Zhao <yuzhao@google.com>
Tested-by: Konstantin Kharlamov
---
 mm/vmscan.c | 186 +++++++++++++++++++++++++++-------------------------
 1 file changed, 98 insertions(+), 88 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 74296c2d1fed..802b6c7ce5b5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2562,6 +2562,103 @@ enum scan_balance {
         SCAN_FILE,
 };
 
+static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
+{
+        unsigned long file;
+        struct lruvec *target_lruvec;
+
+        target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+
+        /*
+         * Determine the scan balance between anon and file LRUs.
+         */
+        spin_lock_irq(&target_lruvec->lru_lock);
+        sc->anon_cost = target_lruvec->anon_cost;
+        sc->file_cost = target_lruvec->file_cost;
+        spin_unlock_irq(&target_lruvec->lru_lock);
+
+        /*
+         * Target desirable inactive:active list ratios for the anon
+         * and file LRU lists.
+         */
+        if (!sc->force_deactivate) {
+                unsigned long refaults;
+
+                refaults = lruvec_page_state(target_lruvec,
+                                WORKINGSET_ACTIVATE_ANON);
+                if (refaults != target_lruvec->refaults[0] ||
+                    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+                        sc->may_deactivate |= DEACTIVATE_ANON;
+                else
+                        sc->may_deactivate &= ~DEACTIVATE_ANON;
+
+                /*
+                 * When refaults are being observed, it means a new
+                 * workingset is being established. Deactivate to get
+                 * rid of any stale active pages quickly.
+                 */
+                refaults = lruvec_page_state(target_lruvec,
+                                WORKINGSET_ACTIVATE_FILE);
+                if (refaults != target_lruvec->refaults[1] ||
+                    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
+                        sc->may_deactivate |= DEACTIVATE_FILE;
+                else
+                        sc->may_deactivate &= ~DEACTIVATE_FILE;
+        } else
+                sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
+
+        /*
+         * If we have plenty of inactive file pages that aren't
+         * thrashing, try to reclaim those first before touching
+         * anonymous pages.
+         */
+        file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
+        if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+                sc->cache_trim_mode = 1;
+        else
+                sc->cache_trim_mode = 0;
+
+        /*
+         * Prevent the reclaimer from falling into the cache trap: as
+         * cache pages start out inactive, every cache fault will tip
+         * the scan balance towards the file LRU. And as the file LRU
+         * shrinks, so does the window for rotation from references.
+         * This means we have a runaway feedback loop where a tiny
+         * thrashing file LRU becomes infinitely more attractive than
+         * anon pages. Try to detect this based on file LRU size.
+         */
+        if (!cgroup_reclaim(sc)) {
+                unsigned long total_high_wmark = 0;
+                unsigned long free, anon;
+                int z;
+
+                free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+                file = node_page_state(pgdat, NR_ACTIVE_FILE) +
+                       node_page_state(pgdat, NR_INACTIVE_FILE);
+
+                for (z = 0; z < MAX_NR_ZONES; z++) {
+                        struct zone *zone = &pgdat->node_zones[z];
+
+                        if (!managed_zone(zone))
+                                continue;
+
+                        total_high_wmark += high_wmark_pages(zone);
+                }
+
+                /*
+                 * Consider anon: if that's low too, this isn't a
+                 * runaway file reclaim problem, but rather just
+                 * extreme pressure. Reclaim as per usual then.
+                 */
+                anon = node_page_state(pgdat, NR_INACTIVE_ANON);
+
+                sc->file_is_tiny =
+                        file + free <= total_high_wmark &&
+                        !(sc->may_deactivate & DEACTIVATE_ANON) &&
+                        anon >> sc->priority;
+        }
+}
+
 /*
  * Determine how aggressively the anon and file LRU lists should be
  * scanned. The relative value of each set of LRU lists is determined
@@ -3032,7 +3129,6 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
         unsigned long nr_reclaimed, nr_scanned;
         struct lruvec *target_lruvec;
         bool reclaimable = false;
-        unsigned long file;
 
         target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 
@@ -3048,93 +3144,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
         nr_reclaimed = sc->nr_reclaimed;
         nr_scanned = sc->nr_scanned;
 
-        /*
-         * Determine the scan balance between anon and file LRUs.
-         */
-        spin_lock_irq(&target_lruvec->lru_lock);
-        sc->anon_cost = target_lruvec->anon_cost;
-        sc->file_cost = target_lruvec->file_cost;
-        spin_unlock_irq(&target_lruvec->lru_lock);
-
-        /*
-         * Target desirable inactive:active list ratios for the anon
-         * and file LRU lists.
-         */
-        if (!sc->force_deactivate) {
-                unsigned long refaults;
-
-                refaults = lruvec_page_state(target_lruvec,
-                                WORKINGSET_ACTIVATE_ANON);
-                if (refaults != target_lruvec->refaults[0] ||
-                    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
-                        sc->may_deactivate |= DEACTIVATE_ANON;
-                else
-                        sc->may_deactivate &= ~DEACTIVATE_ANON;
-
-                /*
-                 * When refaults are being observed, it means a new
-                 * workingset is being established. Deactivate to get
-                 * rid of any stale active pages quickly.
-                 */
-                refaults = lruvec_page_state(target_lruvec,
-                                WORKINGSET_ACTIVATE_FILE);
-                if (refaults != target_lruvec->refaults[1] ||
-                    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
-                        sc->may_deactivate |= DEACTIVATE_FILE;
-                else
-                        sc->may_deactivate &= ~DEACTIVATE_FILE;
-        } else
-                sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
-
-        /*
-         * If we have plenty of inactive file pages that aren't
-         * thrashing, try to reclaim those first before touching
-         * anonymous pages.
-         */
-        file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-        if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
-                sc->cache_trim_mode = 1;
-        else
-                sc->cache_trim_mode = 0;
-
-        /*
-         * Prevent the reclaimer from falling into the cache trap: as
-         * cache pages start out inactive, every cache fault will tip
-         * the scan balance towards the file LRU. And as the file LRU
-         * shrinks, so does the window for rotation from references.
-         * This means we have a runaway feedback loop where a tiny
-         * thrashing file LRU becomes infinitely more attractive than
-         * anon pages. Try to detect this based on file LRU size.
-         */
-        if (!cgroup_reclaim(sc)) {
-                unsigned long total_high_wmark = 0;
-                unsigned long free, anon;
-                int z;
-
-                free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
-                file = node_page_state(pgdat, NR_ACTIVE_FILE) +
-                       node_page_state(pgdat, NR_INACTIVE_FILE);
-
-                for (z = 0; z < MAX_NR_ZONES; z++) {
-                        struct zone *zone = &pgdat->node_zones[z];
-                        if (!managed_zone(zone))
-                                continue;
-
-                        total_high_wmark += high_wmark_pages(zone);
-                }
-
-                /*
-                 * Consider anon: if that's low too, this isn't a
-                 * runaway file reclaim problem, but rather just
-                 * extreme pressure. Reclaim as per usual then.
-                 */
-                anon = node_page_state(pgdat, NR_INACTIVE_ANON);
-
-                sc->file_is_tiny =
-                        file + free <= total_high_wmark &&
-                        !(sc->may_deactivate & DEACTIVATE_ANON) &&
-                        anon >> sc->priority;
-        }
+        prepare_scan_count(pgdat, sc);
 
         shrink_node_memcgs(pgdat, sc);
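
As an aside for reviewers new to this code path: several of the moved
heuristics key off "x >> sc->priority", which is nonzero only when x is
at least 2^priority pages. Below is a minimal standalone userspace
sketch of that thresholding, not part of the patch; only DEF_PRIORITY
(12) comes from the kernel (include/linux/mmzone.h), and the LRU size
used is made up for illustration.

/*
 * Illustration only: how "x >> priority" acts as a shrinking threshold.
 * Reclaim starts at DEF_PRIORITY and decrements toward 0 as it grows
 * more desperate, so the bar for "plenty of pages" halves at each step.
 *
 * Build: cc -o priority_demo priority_demo.c
 */
#include <stdbool.h>
#include <stdio.h>

#define DEF_PRIORITY	12	/* kernel default reclaim priority */

int main(void)
{
	unsigned long inactive_file = 3000;	/* hypothetical LRU size, in pages */

	for (int priority = DEF_PRIORITY; priority >= 0; priority--) {
		/* Mirrors "file >> sc->priority" in prepare_scan_count(). */
		bool enough = inactive_file >> priority;

		printf("priority %2d: need >= %4lu pages -> %s\n",
		       priority, 1UL << priority,
		       enough ? "plenty" : "too few");
	}
	return 0;
}

With 3000 inactive file pages, the check first passes once priority has
dropped to 11 (threshold 2048 pages), which is how both cache_trim_mode
and the anon-size condition in file_is_tiny become easier to satisfy as
reclaim pressure mounts.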