From patchwork Wed Jun 3 23:02:57 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11586553
Date: Wed, 03 Jun 2020 16:02:57 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, cai@lca.pw, hannes@cmpxchg.org,
 iamjoonsoo.kim@lge.com, linux-mm@kvack.org, mhocko@suse.com,
 minchan@kernel.org, mm-commits@vger.kernel.org, riel@surriel.com,
 torvalds@linux-foundation.org
Subject: [patch 111/131] mm: deactivations shouldn't bias the LRU balance
Message-ID: <20200603230257.5Jj_xOJV8%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>

From: Johannes Weiner
Subject: mm: deactivations shouldn't bias the LRU balance

Operations like MADV_FREE, FADV_DONTNEED etc. currently move any affected
active pages to the inactive list to accelerate their reclaim (good), but
they also steer page reclaim toward that LRU type and away from the other
(bad).

This is undesirable because such operations are not part of the regular
page aging cycle; they are a fluke that doesn't say much about the
remaining pages on that list.  Those pages might all be in heavy use, and
once the chunk of easy victims has been purged, the VM continues to apply
elevated pressure on the remaining hot pages.  The other LRU, meanwhile,
might have easily reclaimable pages, and there was never a need to steer
away from it in the first place.

As the previous patch outlined, we should focus on recording actually
observed cost to steer the balance, rather than speculating about the
potential value of one LRU list over the other.  In that spirit, leave
explicitly deactivated pages for the LRU algorithm to pick up, and let
rotations decide which list is the easiest to reclaim.
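For illustration, here is a minimal userspace sketch (not part of this
patch) of the two operations named above, which reach the deactivation
paths touched below; the file path is an arbitrary example:

	/* lazyfree-demo.c: exercises MADV_FREE on anonymous memory and
	 * POSIX_FADV_DONTNEED on page cache pages. */
	#include <fcntl.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 16 * 4096;

		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;
		memset(buf, 0xaa, len);		/* fault the pages in */

		/* Page cache: deactivate the file's cached pages. */
		int fd = open("/tmp/lazyfree-demo", O_RDWR | O_CREAT, 0600);
		if (fd < 0)
			return 1;
		write(fd, buf, len);
		fsync(fd);			/* clean pages can be dropped */
		posix_fadvise(fd, 0, len, POSIX_FADV_DONTNEED);

		/* Anonymous memory: mark it lazily freeable, moving the
		 * pages to an inactive list. */
		madvise(buf, len, MADV_FREE);

		close(fd);
		munmap(buf, len);
		return 0;
	}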
[cai@lca.pw: fix set-but-not-used warning]
  Link: http://lkml.kernel.org/r/20200522133335.GA624@Qians-MacBook-Air.local
Link: http://lkml.kernel.org/r/20200520232525.798933-10-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner
Acked-by: Minchan Kim
Acked-by: Michal Hocko
Cc: Joonsoo Kim
Cc: Rik van Riel
Cc: Qian Cai
Signed-off-by: Andrew Morton
---

 mm/swap.c |    7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

--- a/mm/swap.c~mm-deactivations-shouldnt-bias-the-lru-balance
+++ a/mm/swap.c
@@ -498,7 +498,7 @@ void lru_cache_add_active_or_unevictable
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 			      void *arg)
 {
-	int lru, file;
+	int lru;
 	bool active;
 
 	if (!PageLRU(page))
@@ -512,7 +512,6 @@ static void lru_deactivate_file_fn(struc
 		return;
 
 	active = PageActive(page);
-	file = page_is_file_lru(page);
 	lru = page_lru_base_type(page);
 
 	del_page_from_lru_list(page, lruvec, lru + active);
@@ -538,14 +537,12 @@ static void lru_deactivate_file_fn(struc
 
 	if (active)
 		__count_vm_event(PGDEACTIVATE);
-	lru_note_cost(lruvec, !file, hpage_nr_pages(page));
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
 
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
@@ -554,7 +551,6 @@ static void lru_deactivate_fn(struct pag
 		add_page_to_lru_list(page, lruvec, lru);
 
 		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
-		lru_note_cost(lruvec, !file, hpage_nr_pages(page));
 	}
 }
 
@@ -579,7 +575,6 @@ static void lru_lazyfree_fn(struct page
 
 		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
 		count_memcg_page_event(page, PGLAZYFREE);
-		lru_note_cost(lruvec, 0, hpage_nr_pages(page));
 	}
 }
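For context, a simplified sketch of the cost-based balancing idea the
series moves toward (illustrative only; the kernel's real apportioning
lives in get_scan_count(), and the struct and helper names here are made
up for the example): each list's share of scan pressure shrinks as its
observed reclaim cost grows, so pressure shifts toward whichever list has
recently been cheap to reclaim.

	/* Illustrative only: not the kernel's actual code. */
	struct cost {
		unsigned long anon;	/* cost charged to the anon LRU */
		unsigned long file;	/* cost charged to the file LRU */
	};

	static unsigned long anon_scan_share(unsigned long nr_to_scan,
					     const struct cost *c)
	{
		unsigned long total = c->anon + c->file;

		/* Scan anon in proportion to how costly file has been:
		 * high file cost => scan more anon, and vice versa.
		 * The +1/+2 avoid division by zero on fresh lruvecs. */
		return nr_to_scan * (c->file + 1) / (total + 2);
	}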