From patchwork Wed May 20 23:25:20 2020
X-Patchwork-Submitter: Johannes Weiner <hannes@cmpxchg.org>
X-Patchwork-Id: 11561713
From: Johannes Weiner <hannes@cmpxchg.org>
To: linux-mm@kvack.org
Cc: Rik van Riel, Minchan Kim, Michal Hocko, Andrew Morton, Joonsoo Kim,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 09/14] mm: deactivations shouldn't bias the LRU balance
Date: Wed, 20 May 2020 19:25:20 -0400
Message-Id: <20200520232525.798933-10-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200520232525.798933-1-hannes@cmpxchg.org>
References: <20200520232525.798933-1-hannes@cmpxchg.org>

Operations like MADV_FREE, FADV_DONTNEED etc. currently move any
affected active pages to the inactive list to accelerate their reclaim
(good), but they also steer page reclaim toward that LRU type, or away
from the other (bad).

This is undesirable because such operations are not part of the regular
page aging cycle: they are a fluke that says little about the remaining
pages on that list. Those pages might all be in heavy use, and once the
chunk of easy victims has been purged, the VM continues to apply
elevated pressure on the remaining hot pages. The other LRU, meanwhile,
might have easily reclaimable pages, and there was never a need to
steer away from it in the first place.

As the previous patch outlined, we should focus on recording actually
observed cost to steer the balance rather than speculating about the
potential value of one LRU list over the other. In that spirit, leave
explicitly deactivated pages to the LRU algorithm to pick up, and let
rotations decide which list is the easiest to reclaim.
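For reference, the deactivation paths touched by this patch are driven
from userspace by calls like the following. This is a minimal
illustrative sketch, not part of the patch; the mapping size and the
scratch-file path are made up, and error handling is abbreviated
(MADV_FREE needs Linux >= 4.5):

/* Sketch: the userspace operations named above. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 16 * 1024 * 1024;

	/* Anonymous memory: touch it, then mark it lazily freeable. */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0xaa, len);
	/* Deactivates the pages; no longer biases the LRU balance. */
	if (madvise(buf, len, MADV_FREE))
		perror("madvise(MADV_FREE)");

	/* Page cache: drop cached pages of a file we no longer need. */
	int fd = open("/tmp/scratch.dat", O_RDONLY);	/* placeholder path */
	if (fd >= 0) {
		/* posix_fadvise() returns an error number, not -1/errno. */
		if (posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED))
			fprintf(stderr, "posix_fadvise failed\n");
		close(fd);
	}

	munmap(buf, len);
	return 0;
}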
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Minchan Kim
Acked-by: Michal Hocko
---
 mm/swap.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 5d62c5a0c651..d7912bfb597f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -515,14 +515,12 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 
 	if (active)
 		__count_vm_event(PGDEACTIVATE);
-	lru_note_cost(lruvec, !file, hpage_nr_pages(page));
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 			      void *arg)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
 
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
@@ -531,7 +529,6 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 
 		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
-		lru_note_cost(lruvec, !file, hpage_nr_pages(page));
 	}
 }
 
@@ -556,7 +553,6 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
 		count_memcg_page_event(page, PGLAZYFREE);
-		lru_note_cost(lruvec, 0, hpage_nr_pages(page));
 	}
 }