From patchwork Sat May 9 14:19:46 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11538345
Date: Sat, 9 May 2020 07:19:46 -0700
Message-Id: <20200509141946.158892-1-shakeelb@google.com>
Subject: [PATCH] mm: fix LRU balancing effect of new transparent huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

From: Johannes Weiner

Currently, THP are counted as single pages until they are split right
before being swapped out. However, at that point the VM is already in
the middle of reclaim, and adjusting the LRU balance then is useless.

Always account THP by the number of basepages, and remove the fixup
from the splitting path.
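For illustration only (this sketch is not part of the diff below): the
callers changed here are assumed to already derive an nr_pages value via
hpage_nr_pages(), which returns 1 for a base page and HPAGE_PMD_NR (512
with 4K base pages) for a THP; the body shown is a simplified stand-in
for __activate_page(), not its exact source. The updated helper then
charges the whole compound page to the reclaim stats in one step:

	/*
	 * Illustrative sketch: the caller is assumed to compute nr_pages
	 * with hpage_nr_pages(); the LRU manipulation itself is elided.
	 */
	static void __activate_page(struct page *page, struct lruvec *lruvec,
				    void *arg)
	{
		if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
			int file = page_is_file_lru(page);
			int nr_pages = hpage_nr_pages(page);	/* 1 or HPAGE_PMD_NR */

			/* ... move the page to the active LRU ... */

			__count_vm_events(PGACTIVATE, nr_pages);
			update_page_reclaim_stat(lruvec, file, 1, nr_pages);
		}
	}

With this, activating a 2MB THP bumps recent_scanned[file] by 512 rather
than by 1, so the anon/file balancing heuristic sees the event at its
true weight and no fixup is needed when the THP is later split.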
Signed-off-by: Johannes Weiner
Signed-off-by: Shakeel Butt
---
Revived the patch from https://lore.kernel.org/patchwork/patch/685703/

 mm/swap.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 4eb179ee0b72..b75c0ce90418 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -262,14 +262,14 @@ void rotate_reclaimable_page(struct page *page)
 	}
 }
 
-static void update_page_reclaim_stat(struct lruvec *lruvec,
-				     int file, int rotated)
+static void update_page_reclaim_stat(struct lruvec *lruvec, int file,
+				     int rotated, int nr_pages)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 
-	reclaim_stat->recent_scanned[file]++;
+	reclaim_stat->recent_scanned[file] += nr_pages;
 	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+		reclaim_stat->recent_rotated[file] += nr_pages;
 }
 
 static void __activate_page(struct page *page, struct lruvec *lruvec,
@@ -288,7 +288,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE,
 				     nr_pages);
-		update_page_reclaim_stat(lruvec, file, 1);
+		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
 	}
 }
 
@@ -546,7 +546,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
 				     nr_pages);
 	}
-	update_page_reclaim_stat(lruvec, file, 0);
+	update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
@@ -564,7 +564,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
 				     nr_pages);
-		update_page_reclaim_stat(lruvec, file, 0);
+		update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 	}
 }
 
@@ -590,7 +590,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
 				     nr_pages);
-		update_page_reclaim_stat(lruvec, 1, 0);
+		update_page_reclaim_stat(lruvec, 1, 0, nr_pages);
 	}
 }
 
@@ -899,8 +899,6 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct page *page, struct page *page_tail,
 		       struct lruvec *lruvec, struct list_head *list)
 {
-	const int file = 0;
-
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
@@ -926,9 +924,6 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		add_page_to_lru_list_tail(page_tail, lruvec,
 					  page_lru(page_tail));
 	}
-
-	if (!PageUnevictable(page))
-		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
 }
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -973,7 +968,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	if (page_evictable(page)) {
 		lru = page_lru(page);
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
-					 PageActive(page));
+					 PageActive(page), nr_pages);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {