From patchwork Thu Mar 19 15:50:26 2020
X-Patchwork-Submitter: chenqiwu
X-Patchwork-Id: 11447613
From: qiwuchen55@gmail.com
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, chenqiwu
Subject: [PATCH] mm/vmscan: fix incorrect return type for cgroup_reclaim()
Date: Thu, 19 Mar 2020 23:50:26 +0800
Message-Id: <1584633026-26288-1-git-send-email-qiwuchen55@gmail.com>
X-Mailer: git-send-email 1.9.1

From: chenqiwu <qiwuchen55@gmail.com>

cgroup_reclaim() currently returns bool, but the more useful return
type is the struct mem_cgroup pointer itself: a non-NULL pointer still
tests true in the existing boolean callers, and the function can then
wrap every access to sc->target_mem_cgroup in the vmscan code.

Signed-off-by: chenqiwu <qiwuchen55@gmail.com>
Acked-by: Chris Down
---
 mm/vmscan.c | 39 +++++++++++++++++++++------------------
 1 file changed, 21 insertions(+), 18 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index dca623d..c795fc3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -238,7 +238,7 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 	up_write(&shrinker_rwsem);
 }
 
-static bool cgroup_reclaim(struct scan_control *sc)
+static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
 }
@@ -276,9 +276,9 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 {
 }
 
-static bool cgroup_reclaim(struct scan_control *sc)
+static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
 {
-	return false;
+	return NULL;
 }
 
 static bool writeback_throttling_sane(struct scan_control *sc)
@@ -984,7 +984,7 @@ static enum page_references page_check_references(struct page *page,
 	int referenced_ptes, referenced_page;
 	unsigned long vm_flags;
 
-	referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
+	referenced_ptes = page_referenced(page, 1, cgroup_reclaim(sc),
 					  &vm_flags);
 	referenced_page = TestClearPageReferenced(page);
@@ -1422,7 +1422,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true,
-							 sc->target_mem_cgroup))
+							 cgroup_reclaim(sc)))
 			goto keep_locked;
 
 		unlock_page(page);
@@ -1907,6 +1907,7 @@ static int current_may_throttle(void)
 	enum vm_event_item item;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 	bool stalled = false;
 
 	while (unlikely(too_many_isolated(pgdat, file, sc))) {
@@ -1933,7 +1934,7 @@ static int current_may_throttle(void)
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
 	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
-	if (!cgroup_reclaim(sc))
+	if (!target_memcg)
 		__count_vm_events(item, nr_scanned);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
 	spin_unlock_irq(&pgdat->lru_lock);
@@ -1947,7 +1948,7 @@ static int current_may_throttle(void)
 	spin_lock_irq(&pgdat->lru_lock);
 
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
-	if (!cgroup_reclaim(sc))
+	if (!target_memcg)
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
 	reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
@@ -2041,7 +2042,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			}
 		}
 
-		if (page_referenced(page, 0, sc->target_mem_cgroup,
+		if (page_referenced(page, 0, cgroup_reclaim(sc),
 				    &vm_flags)) {
 			nr_rotated += hpage_nr_pages(page);
 			/*
@@ -2625,7 +2626,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
 
 static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 {
-	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 	struct mem_cgroup *memcg;
 
 	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
@@ -2686,10 +2687,11 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 	bool reclaimable = false;
 	unsigned long file;
 
-	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 again:
 	memset(&sc->nr, 0, sizeof(sc->nr));
@@ -2744,7 +2746,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	 * thrashing file LRU becomes infinitely more attractive than
 	 * anon pages. Try to detect this based on file LRU size.
 	 */
-	if (!cgroup_reclaim(sc)) {
+	if (!target_memcg) {
 		unsigned long total_high_wmark = 0;
 		unsigned long free, anon;
 		int z;
@@ -2782,7 +2784,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	}
 
 	/* Record the subtree's reclaim efficiency */
-	vmpressure(sc->gfp_mask, sc->target_mem_cgroup, true,
+	vmpressure(sc->gfp_mask, target_memcg, true,
 		   sc->nr_scanned - nr_scanned,
 		   sc->nr_reclaimed - nr_reclaimed);
@@ -2833,7 +2835,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		 * stalling in wait_iff_congested().
 		 */
 		if ((current_is_kswapd() ||
-		     (cgroup_reclaim(sc) && writeback_throttling_sane(sc))) &&
+		     (target_memcg && writeback_throttling_sane(sc))) &&
 		    sc->nr.dirty && sc->nr.dirty == sc->nr.congested)
 			set_bit(LRUVEC_CONGESTED, &target_lruvec->flags);
@@ -3020,14 +3022,15 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	pg_data_t *last_pgdat;
 	struct zoneref *z;
 	struct zone *zone;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 retry:
 	delayacct_freepages_start();
 
-	if (!cgroup_reclaim(sc))
+	if (!target_memcg)
 		__count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);
 
 	do {
-		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
+		vmpressure_prio(sc->gfp_mask, target_memcg,
 				sc->priority);
 		sc->nr_scanned = 0;
 		shrink_zones(zonelist, sc);
@@ -3053,12 +3056,12 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 			continue;
 		last_pgdat = zone->zone_pgdat;
 
-		snapshot_refaults(sc->target_mem_cgroup, zone->zone_pgdat);
+		snapshot_refaults(target_memcg, zone->zone_pgdat);
 
-		if (cgroup_reclaim(sc)) {
+		if (target_memcg) {
 			struct lruvec *lruvec;
 
-			lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup,
+			lruvec = mem_cgroup_lruvec(target_memcg,
 						   zone->zone_pgdat);
 			clear_bit(LRUVEC_CONGESTED, &lruvec->flags);
 		}
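
For readers outside the kernel tree, here is a minimal standalone C sketch of the idiom the patch relies on: a function that returns a pointer can serve both as a predicate ("are we doing cgroup reclaim?", since NULL tests false) and as an accessor for the target memcg. The struct definitions below are simplified stand-ins for the real scan_control and mem_cgroup types, not kernel code.

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types (assumption, not real layout). */
struct mem_cgroup { int id; };

struct scan_control {
	/* NULL for global reclaim, non-NULL for cgroup-targeted reclaim */
	struct mem_cgroup *target_mem_cgroup;
};

/* Same shape as the patched kernel function: returns the pointer itself. */
static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
{
	return sc->target_mem_cgroup;
}

int main(void)
{
	struct mem_cgroup memcg = { .id = 42 };
	struct scan_control global = { .target_mem_cgroup = NULL };
	struct scan_control targeted = { .target_mem_cgroup = &memcg };

	/* Used as a predicate: a NULL pointer is false, so the callers that
	 * previously relied on the bool return keep working unchanged. */
	if (!cgroup_reclaim(&global))
		printf("global reclaim: account in the global vm counters\n");

	/* Used as an accessor: the same call hands back the pointer,
	 * replacing direct sc->target_mem_cgroup dereferences. */
	struct mem_cgroup *target = cgroup_reclaim(&targeted);
	if (target)
		printf("cgroup reclaim targeting memcg %d\n", target->id);

	return 0;
}
```

This is why the patch can cache `struct mem_cgroup *target_memcg = cgroup_reclaim(sc);` once per function and then use `target_memcg` both in boolean tests (`if (!target_memcg)`) and as an argument to helpers such as vmpressure() and mem_cgroup_lruvec().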