From patchwork Wed May 20 23:25:12 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11561697
From: Johannes Weiner <hannes@cmpxchg.org>
To: linux-mm@kvack.org
Cc: Rik van Riel, Minchan Kim, Michal Hocko, Andrew Morton, Joonsoo Kim,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 01/14] mm: fix LRU balancing effect of new transparent huge pages
Date: Wed, 20 May 2020 19:25:12 -0400
Message-Id: <20200520232525.798933-2-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200520232525.798933-1-hannes@cmpxchg.org>
References: <20200520232525.798933-1-hannes@cmpxchg.org>

Currently, THP are counted as single pages until they are split right
before being swapped out. However, at that point the VM is already in
the middle of reclaim, and adjusting the LRU balance then is useless.

Always account THP by the number of basepages, and remove the fixup
from the splitting path.
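For context: the accounting below leans on hpage_nr_pages() to convert
a page to its base-page count. A sketch of that helper as it looks
around this kernel version (include/linux/mm.h; shown for illustration
only, it is not part of this patch):

	/*
	 * Number of base pages backing @page: HPAGE_PMD_NR for a
	 * transparent huge page, 1 for a regular page.
	 */
	static inline int hpage_nr_pages(struct page *page)
	{
		if (unlikely(PageTransHuge(page)))
			return HPAGE_PMD_NR;
		return 1;
	}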
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel
Acked-by: Michal Hocko
Acked-by: Minchan Kim
Reviewed-by: Shakeel Butt
---
 mm/swap.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d..68eae1e2787a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -263,13 +263,14 @@ void rotate_reclaimable_page(struct page *page)
 }
 
 static void update_page_reclaim_stat(struct lruvec *lruvec,
-				     int file, int rotated)
+				     int file, int rotated,
+				     unsigned int nr_pages)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 
-	reclaim_stat->recent_scanned[file]++;
+	reclaim_stat->recent_scanned[file] += nr_pages;
 	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+		reclaim_stat->recent_rotated[file] += nr_pages;
 }
 
 static void __activate_page(struct page *page, struct lruvec *lruvec,
@@ -286,7 +287,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		trace_mm_lru_activate(page);
 
 		__count_vm_event(PGACTIVATE);
-		update_page_reclaim_stat(lruvec, file, 1);
+		update_page_reclaim_stat(lruvec, file, 1, hpage_nr_pages(page));
 	}
 }
 
@@ -541,7 +542,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 
 	if (active)
 		__count_vm_event(PGDEACTIVATE);
-	update_page_reclaim_stat(lruvec, file, 0);
+	update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page));
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
@@ -557,7 +558,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 
 		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
-		update_page_reclaim_stat(lruvec, file, 0);
+		update_page_reclaim_stat(lruvec, file, 0, hpage_nr_pages(page));
 	}
 }
 
@@ -582,7 +583,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
 		count_memcg_page_event(page, PGLAZYFREE);
-		update_page_reclaim_stat(lruvec, 1, 0);
+		update_page_reclaim_stat(lruvec, 1, 0, hpage_nr_pages(page));
 	}
 }
 
@@ -890,8 +891,6 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct page *page, struct page *page_tail,
 		       struct lruvec *lruvec, struct list_head *list)
 {
-	const int file = 0;
-
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
@@ -917,9 +916,6 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		add_page_to_lru_list_tail(page_tail, lruvec,
 					  page_lru(page_tail));
 	}
-
-	if (!PageUnevictable(page))
-		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -962,8 +958,9 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 
 	if (page_evictable(page)) {
 		lru = page_lru(page);
-		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
-					 PageActive(page));
+		update_page_reclaim_stat(lruvec, is_file_lru(lru),
+					 PageActive(page),
+					 hpage_nr_pages(page));
 		if (was_unevictable)
 			count_vm_event(UNEVICTABLE_PGRESCUED);
 	} else {
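To put a number on the change (illustrative arithmetic assuming x86-64
defaults, not taken from the patch itself): with PAGE_SHIFT = 12 and
HPAGE_PMD_SHIFT = 21,

	HPAGE_PMD_NR = 1 << (HPAGE_PMD_SHIFT - PAGE_SHIFT)
	             = 1 << 9
	             = 512

so activating or deactivating one PMD-sized THP now moves
recent_scanned[]/recent_rotated[] by 512 - the same weight its base
pages would have contributed after a split - instead of by 1.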