From patchwork Wed May 27 18:29:14 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11573755
Date: Wed, 27 May 2020 11:29:14 -0700
Message-Id: <20200527182916.249910-1-shakeelb@google.com>
Subject: [PATCH resend 1/3] mm: swap: fix vmstats for huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

Many of the callbacks called by pagevec_lru_move_fn() do not correctly
update the vmstats for huge pages. Fix that. Also make
__pagevec_lru_add_fn() use the irq-unsafe alternative to update the
stat, as interrupts are already disabled.
Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
---
 mm/swap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index a37bd7b202ac..3dbef6517cac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -225,7 +225,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
-		(*pgmoved)++;
+		(*pgmoved) += hpage_nr_pages(page);
 	}
 }

@@ -285,7 +285,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);

-		__count_vm_event(PGACTIVATE);
+		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }

@@ -503,6 +503,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 {
 	int lru, file;
 	bool active;
+	int nr_pages = hpage_nr_pages(page);

 	if (!PageLRU(page))
 		return;

@@ -536,11 +537,11 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		 * We moves tha page into tail of inactive.
 		 */
 		add_page_to_lru_list_tail(page, lruvec, lru);
-		__count_vm_event(PGROTATED);
+		__count_vm_events(PGROTATED, nr_pages);
 	}

 	if (active)
-		__count_vm_event(PGDEACTIVATE);
+		__count_vm_events(PGDEACTIVATE, nr_pages);
 	update_page_reclaim_stat(lruvec, file, 0);
 }

@@ -929,6 +930,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	int nr_pages = hpage_nr_pages(page);

 	VM_BUG_ON_PAGE(PageLRU(page), page);

@@ -966,13 +968,13 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
 					 PageActive(page));
 		if (was_unevictable)
-			count_vm_event(UNEVICTABLE_PGRESCUED);
+			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
 		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
-			count_vm_event(UNEVICTABLE_PGCULLED);
+			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}

 	add_page_to_lru_list(page, lruvec, lru);

From patchwork Wed May 27 18:29:47 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11573757
Date: Wed, 27 May 2020 11:29:47 -0700
Message-Id: <20200527182947.251343-1-shakeelb@google.com>
Subject: [PATCH resend 2/3] mm: swap: memcg: fix memcg stats for huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

Commit 2262185c5b28 ("mm: per-cgroup memory reclaim stats") added
PGLAZYFREE, PGACTIVATE & PGDEACTIVATE stats for cgroups but missed a
couple of places, and PGLAZYFREE missed huge page handling. Fix that.
Also, for PGLAZYFREE, use the irq-unsafe function to update the stat,
as interrupts are already disabled.

Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
---
 mm/swap.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 3dbef6517cac..4eb179ee0b72 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -278,6 +278,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
+		int nr_pages = hpage_nr_pages(page);

 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);

@@ -285,7 +286,8 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);

-		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
+		__count_vm_events(PGACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }

@@ -540,8 +542,10 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGROTATED, nr_pages);
 	}

-	if (active)
+	if (active) {
 		__count_vm_events(PGDEACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
+	}
 	update_page_reclaim_stat(lruvec, file, 0);
 }

@@ -551,13 +555,15 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
+		int nr_pages = hpage_nr_pages(page);

 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec, lru);

-		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
+		__count_vm_events(PGDEACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 		update_page_reclaim_stat(lruvec, file, 0);
 	}
 }

@@ -568,6 +574,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
+		int nr_pages = hpage_nr_pages(page);

 		del_page_from_lru_list(page, lruvec,
 				       LRU_INACTIVE_ANON + active);

@@ -581,8 +588,8 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		ClearPageSwapBacked(page);
 		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);

-		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
-		count_memcg_page_event(page, PGLAZYFREE);
+		__count_vm_events(PGLAZYFREE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
 		update_page_reclaim_stat(lruvec, 1, 0);
 	}
 }

From patchwork Wed May 27 18:29:58 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11573759
Date: Wed, 27 May 2020 11:29:58 -0700
Message-Id: <20200527182958.252402-1-shakeelb@google.com>
Subject: [PATCH resend 3/3] mm: fix LRU balancing effect of new transparent huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

From: Johannes Weiner

Currently, THPs are counted as single pages until they are split right
before being swapped out. However, at that point the VM is already in
the middle of reclaim, and adjusting the LRU balance then is useless.
Always account THPs by the number of basepages, and remove the fixup
from the splitting path.
Signed-off-by: Johannes Weiner
Signed-off-by: Shakeel Butt
---
 mm/swap.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 4eb179ee0b72..b75c0ce90418 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -262,14 +262,14 @@ void rotate_reclaimable_page(struct page *page)
 	}
 }

-static void update_page_reclaim_stat(struct lruvec *lruvec,
-				     int file, int rotated)
+static void update_page_reclaim_stat(struct lruvec *lruvec, int file,
+				     int rotated, int nr_pages)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;

-	reclaim_stat->recent_scanned[file]++;
+	reclaim_stat->recent_scanned[file] += nr_pages;
 	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+		reclaim_stat->recent_rotated[file] += nr_pages;
 }

 static void __activate_page(struct page *page, struct lruvec *lruvec,

@@ -288,7 +288,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
-		update_page_reclaim_stat(lruvec, file, 1);
+		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
 	}
 }

@@ -546,7 +546,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 	}
-	update_page_reclaim_stat(lruvec, file, 0);
+	update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 }

 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,

@@ -564,7 +564,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);

-		update_page_reclaim_stat(lruvec, file, 0);
+		update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 	}
 }

@@ -590,7 +590,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
-		update_page_reclaim_stat(lruvec, 1, 0);
+		update_page_reclaim_stat(lruvec, 1, 0, nr_pages);
 	}
 }

@@ -899,8 +899,6 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct page *page, struct page *page_tail,
 		       struct lruvec *lruvec, struct list_head *list)
 {
-	const int file = 0;
-
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);

@@ -926,9 +924,6 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		add_page_to_lru_list_tail(page_tail, lruvec,
 					  page_lru(page_tail));
 	}
-
-	if (!PageUnevictable(page))
-		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */

@@ -973,7 +968,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	if (page_evictable(page)) {
 		lru = page_lru(page);
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
-					 PageActive(page));
+					 PageActive(page), nr_pages);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {