From patchwork Mon Mar 23 05:52:12 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11452389
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 8/8] mm/swap: count a new anonymous page as a reclaim_stat's rotate
Date: Mon, 23 Mar 2020 14:52:12 +0900
Message-Id: <1584942732-2184-9-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

reclaim_stat's rotate is used to control the ratio of scanning between
the file and anonymous LRUs.  Previously, every new anonymous page was
counted toward rotate, which protected anonymous pages on the active
LRU and made reclaim hit the anonymous LRU less often than the file
LRU.

The situation has now changed: new anonymous pages are no longer added
to the active LRU, so rotate ends up far lower than before.  Reclaim on
the anonymous LRU therefore happens more often, which can hurt systems
tuned for the previous behaviour.  To avoid that, this patch counts a
new anonymous page toward reclaim_stat's rotate.  Adding this count to
rotate does not strictly fit the current algorithm, but reducing the
regression is more important.  I found this regression with a
kernel-build test, where it shows up as a roughly 2~5% performance
degradation; with this workaround, performance is completely restored.
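As background for the effect described above: the rotate statistic
feeds the balancing between the anonymous and file LRUs, and an LRU
whose pages are frequently found rotated (recently referenced) gets
proportionally less scan pressure.  The stand-alone snippet below is
only a simplified model of that balancing, not the kernel's
get_scan_count() code; the scan_pressure() helper and the numeric
constants are illustrative.

#include <stdio.h>

/*
 * Simplified model: pressure on an LRU is roughly proportional to
 * priority * (scanned + 1) / (rotated + 1), so a higher rotate count
 * means a smaller share of reclaim aimed at that LRU.
 */
static unsigned long scan_pressure(unsigned long prio,
                                   unsigned long scanned,
                                   unsigned long rotated)
{
        return prio * (scanned + 1) / (rotated + 1);
}

int main(void)
{
        unsigned long scanned = 10000;

        /* Old behaviour: every new anonymous page also counted as rotated. */
        printf("anon pressure, high rotate: %lu\n",
               scan_pressure(60, scanned, 8000));       /* prints 74 */

        /* After the series, without this patch: far fewer rotations. */
        printf("anon pressure, low rotate:  %lu\n",
               scan_pressure(60, scanned, 1000));       /* prints 599 */

        return 0;
}

With the same scanned count, losing the rotate contribution from new
anonymous pages raises the modelled anon pressure by almost an order of
magnitude, which is the shift this patch works around.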
v2: fix a bug that reused the rotate value for the previous page

Reported-by: kernel test robot
Signed-off-by: Joonsoo Kim
---
 mm/swap.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/mm/swap.c b/mm/swap.c
index 442d27e..1f19301 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -187,6 +187,9 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(get_kernel_page);
 
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
+				 void *arg);
+
 static void pagevec_lru_move_fn(struct pagevec *pvec,
 	void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
 	void *arg)
@@ -199,6 +202,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
+		void *arg_orig = arg;
 
 		if (pagepgdat != pgdat) {
 			if (pgdat)
@@ -207,8 +211,22 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 			spin_lock_irqsave(&pgdat->lru_lock, flags);
 		}
 
+		if (move_fn == __pagevec_lru_add_fn) {
+			struct list_head *entry = &page->lru;
+			unsigned long next = (unsigned long)entry->next;
+			unsigned long rotate = next & 2;
+
+			if (rotate) {
+				VM_BUG_ON(arg);
+
+				next = next & ~2;
+				entry->next = (struct list_head *)next;
+				arg = (void *)rotate;
+			}
+		}
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		(*move_fn)(page, lruvec, arg);
+		arg = arg_orig;
 	}
 	if (pgdat)
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
@@ -475,6 +493,14 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
 				    hpage_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 	}
+
+	if (PageSwapBacked(page) && !unevictable) {
+		struct list_head *entry = &page->lru;
+		unsigned long next = (unsigned long)entry->next;
+
+		next = next | 2;
+		entry->next = (struct list_head *)next;
+	}
 	lru_cache_add(page);
 }
 
@@ -927,6 +953,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	unsigned long rotate = (unsigned long)arg;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
@@ -962,7 +989,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	if (page_evictable(page)) {
 		lru = page_lru(page);
 		update_page_reclaim_stat(lruvec, page_is_file_cache(page),
-					 PageActive(page));
+					 PageActive(page) | rotate);
 		if (was_unevictable)
 			count_vm_event(UNEVICTABLE_PGRESCUED);
 	} else {
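For readers skimming the hunks above: the "new evictable anonymous
page" flag travels from lru_cache_add_inactive_or_unevictable() to
__pagevec_lru_add_fn() in bit 0x2 of page->lru.next, and
pagevec_lru_move_fn() clears that bit again before the page is linked
onto the LRU, so the stored value is left untouched.  The user-space
sketch below shows just that tag/untag round trip; struct list_head
here is a minimal stand-in, and LRU_ROTATE_TAG, tag_rotate() and
untag_rotate() are illustrative names, not kernel APIs.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal stand-in for the kernel's doubly linked list node. */
struct list_head {
        struct list_head *next, *prev;
};

/* Illustrative name for the bit value the patch uses (2). */
#define LRU_ROTATE_TAG  2UL

/* Producer side: mark the node so a later consumer counts it as rotated. */
static void tag_rotate(struct list_head *entry)
{
        entry->next = (struct list_head *)((uintptr_t)entry->next | LRU_ROTATE_TAG);
}

/* Consumer side: read the tag and restore the original pointer value. */
static unsigned long untag_rotate(struct list_head *entry)
{
        uintptr_t next = (uintptr_t)entry->next;
        unsigned long rotate = next & LRU_ROTATE_TAG;

        entry->next = (struct list_head *)(next & ~LRU_ROTATE_TAG);
        return rotate;
}

int main(void)
{
        struct list_head node = { .next = &node, .prev = &node };
        struct list_head *orig = node.next;

        tag_rotate(&node);
        assert(untag_rotate(&node) == LRU_ROTATE_TAG);
        assert(node.next == orig);      /* value is restored bit-for-bit */
        puts("tag round-trip ok");
        return 0;
}

The trick only works because the bit is set and cleared around a window
in which page->lru is not yet used as real list linkage (the page sits
in a pagevec array), so whatever was stored there is preserved exactly.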