From patchwork Tue Oct 22 14:47:58 2019
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11204653
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 3/8] mm: vmscan: move inactive_list_is_low() swap check to the caller
Date: Tue, 22 Oct 2019 10:47:58 -0400
Message-Id: <20191022144803.302233-4-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191022144803.302233-1-hannes@cmpxchg.org>
References: <20191022144803.302233-1-hannes@cmpxchg.org>
List-ID: linux-mm@kvack.org

inactive_list_is_low() should be about one thing: checking the ratio
between the inactive and active lists. Kitchen-sink checks like the
one for swap space make the function hard to use and its callsites
hard to modify. Luckily, most callers already know the swap
situation, so it's easy to clean up.
get_scan_count() has its own, memcg-aware swap check, and doesn't
even get to the inactive_list_is_low() check on the anon list when
there is no swap space available.

shrink_list() is called on the results of get_scan_count(), so that
check is redundant too.

age_active_anon() has its own totalswap_pages check right before it
checks the list proportions.

The shrink_node_memcg() site is the only one that doesn't do its own
swap check. Add it there.

Then delete the swap check from inactive_list_is_low().

Signed-off-by: Johannes Weiner
Reviewed-by: Roman Gushchin
Acked-by: Michal Hocko
---
 mm/vmscan.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index be3c22c274c1..622b77488144 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2226,13 +2226,6 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
 	unsigned long refaults;
 	unsigned long gb;
 
-	/*
-	 * If we don't have swap space, anonymous page deactivation
-	 * is pointless.
-	 */
-	if (!file && !total_swap_pages)
-		return false;
-
 	inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
 	active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
 
@@ -2653,7 +2646,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
	 */
-	if (inactive_list_is_low(lruvec, false, sc, true))
+	if (total_swap_pages && inactive_list_is_low(lruvec, false, sc, true))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }