From patchwork Wed May 27 18:29:14 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11573755
Date: Wed, 27 May 2020 11:29:14 -0700
Message-Id: <20200527182916.249910-1-shakeelb@google.com>
Subject: [PATCH resend 1/3] mm: swap: fix vmstats for huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

Many of the callbacks called by pagevec_lru_move_fn() do not correctly
update the vmstats for huge pages: they bump each counter by one even
when the page is a compound page backed by multiple base pages. Fix
that. Also make __pagevec_lru_add_fn() use the irq-unsafe alternative
to update the stats, since irqs are already disabled in that context.

Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
---
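The callbacks fixed below are invoked once per page, but that page may
be a compound (transparent huge) page occupying more than one base page
on the LRU. A minimal sketch of the corrected accounting, using a
hypothetical callback name for illustration (not code from this patch):

/*
 * Hypothetical pagevec callback, for illustration only.
 * hpage_nr_pages() returns 1 for a normal page and HPAGE_PMD_NR
 * (512 for a 2MB THP on x86-64) for a transparent huge page, so
 * counters must be bumped by that amount rather than by one.
 */
static void example_move_fn(struct page *page, struct lruvec *lruvec,
			    void *arg)
{
	unsigned long *moved = arg;

	/* (*moved)++ would under-count a THP by up to 511 pages. */
	(*moved) += hpage_nr_pages(page);
}
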
 mm/swap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index a37bd7b202ac..3dbef6517cac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -225,7 +225,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
-		(*pgmoved)++;
+		(*pgmoved) += hpage_nr_pages(page);
 	}
 }
 
@@ -285,7 +285,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);
 
-		__count_vm_event(PGACTIVATE);
+		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }
@@ -503,6 +503,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 {
 	int lru, file;
 	bool active;
+	int nr_pages = hpage_nr_pages(page);
 
 	if (!PageLRU(page))
 		return;
@@ -536,11 +537,11 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		 * We moves tha page into tail of inactive.
 		 */
 		add_page_to_lru_list_tail(page, lruvec, lru);
-		__count_vm_event(PGROTATED);
+		__count_vm_events(PGROTATED, nr_pages);
 	}
 
 	if (active)
-		__count_vm_event(PGDEACTIVATE);
+		__count_vm_events(PGDEACTIVATE, nr_pages);
 	update_page_reclaim_stat(lruvec, file, 0);
 }
 
@@ -929,6 +930,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	int nr_pages = hpage_nr_pages(page);
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
@@ -966,13 +968,13 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
 					 PageActive(page));
 		if (was_unevictable)
-			count_vm_event(UNEVICTABLE_PGRESCUED);
+			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
 		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
-			count_vm_event(UNEVICTABLE_PGCULLED);
+			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
 	add_page_to_lru_list(page, lruvec, lru);
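
One note on the count_vm_event() -> __count_vm_events() switch in
__pagevec_lru_add_fn(): the callback runs under pgdat->lru_lock, which
pagevec_lru_move_fn() takes with spin_lock_irqsave(), so interrupts are
already disabled and the cheaper non-irq-safe __ variant suffices. A
simplified sketch of that calling context (abridged, not a literal
excerpt of mm/swap.c):

	/* Simplified from pagevec_lru_move_fn(), for illustration. */
	spin_lock_irqsave(&pgdat->lru_lock, flags);	/* irqs now off */
	(*move_fn)(page, lruvec, arg);	/* e.g. __pagevec_lru_add_fn(),
					 * free to use __count_vm_events() */
	spin_unlock_irqrestore(&pgdat->lru_lock, flags);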