From patchwork Tue Feb 11 06:19:45 2020
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>, Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@techsingularity.net>, kernel-team@lge.com
Subject: [PATCH 1/9] mm/vmscan: make active/inactive ratio as 1:1 for anon lru
Date: Tue, 11 Feb 2020 15:19:45 +0900
Message-Id: <1581401993-20041-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

The current LRU management for anonymous pages has some problems. The most
important one is that it does not protect the workingset, that is, the pages
on the active LRU list. Although this problem will be fixed by the following
patches in this series, some preparation is needed first, and this patch
provides it.

What the following patches do is restore workingset protection: newly
created or swapped-in anonymous pages start their lifetime on the inactive
list. If the inactive list is too small, a page gets too little chance to be
referenced, so it can never become part of the workingset. To give newly
added anonymous pages enough chance, this patch makes the active/inactive
anon LRU ratio 1:1.
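As a quick illustration of the resulting policy, here is a user-space sketch
of the ratio selection (the standalone helper and its parameters are invented
for illustration; in the kernel this logic lives in inactive_is_low(), shown
in the diff below):

#include <math.h>
#include <stdbool.h>

/*
 * Sketch: after this patch, only a file LRU larger than 1GB keeps the
 * sqrt-scaled ratio; an anon LRU always balances active:inactive at 1:1.
 */
unsigned long pick_inactive_ratio(unsigned long inactive, unsigned long active,
				  bool file_lru, int page_shift)
{
	unsigned long gb = (inactive + active) >> (30 - page_shift);

	if (gb && file_lru)
		return (unsigned long)sqrt(10.0 * gb);	/* like int_sqrt(10 * gb) */
	return 1;					/* anon: 1:1 */
}

inactive_is_low() reports the inactive list as low when
inactive * inactive_ratio < active, so a ratio of 1 keeps the two anon lists
roughly the same size.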
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 572fb17..e772f3f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2217,7 +2217,7 @@ static bool inactive_is_low(struct lruvec *lruvec, enum lru_list inactive_lru)
 	active = lruvec_page_state(lruvec, NR_LRU_BASE + active_lru);
 
 	gb = (inactive + active) >> (30 - PAGE_SHIFT);
-	if (gb)
+	if (gb && is_file_lru(inactive_lru))
 		inactive_ratio = int_sqrt(10 * gb);
 	else
 		inactive_ratio = 1;

From patchwork Tue Feb 11 06:19:46 2020
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>, Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@techsingularity.net>, kernel-team@lge.com
Subject: [PATCH 2/9] mm/vmscan: protect the workingset on anonymous LRU
Date: Tue, 11 Feb 2020 15:19:46 +0900
Message-Id: <1581401993-20041-3-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

In the current implementation, a newly created or swapped-in anonymous page
starts its life on the active list. A growing active list triggers
rebalancing of the active/inactive lists, so old pages on the active list
are demoted to the inactive list. Hence, pages on the active list are not
protected at all.

The following is an example of this situation. Assume there are 50 hot pages
on the active list. Numbers denote the number of pages on the
active/inactive list (active | inactive).

1. 50 hot pages on active list
   50(h) | 0
2. workload: 50 newly created (used-once) pages
   50(uo) | 50(h)
3. workload: another 50 newly created (used-once) pages
   50(uo) | 50(uo), swap-out 50(h)

This patch fixes the issue. As with the file LRU, newly created or
swapped-in anonymous pages are now inserted on the inactive list, and are
promoted to the active list only when referenced enough. This simple
modification changes the above example as follows:

1. 50 hot pages on active list
   50(h) | 0
2. workload: 50 newly created (used-once) pages
   50(h) | 50(uo)
3. workload: another 50 newly created (used-once) pages
   50(h) | 50(uo), swap-out 50(uo)

As you can see, the hot pages on the active list are now protected.

Note that this implementation has a drawback: a page cannot be promoted and
will be swapped out if its re-access interval is greater than the size of
the inactive list but less than the size of the total list
(active + inactive). To solve this potential issue, a following patch
applies the workingset detection, long used for the file LRU, to the
anonymous LRU as well.
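The example can be checked mechanically. Below is a self-contained toy
simulation (all names invented; two 50-entry FIFO lists stand in for the
active/inactive anon LRUs) that reproduces both scenarios:

#include <stdio.h>
#include <string.h>

#define LIST_CAP 50

/* A fixed-size FIFO list: slot 0 is the head, slot n-1 the tail. */
struct lru_list {
	char tag[LIST_CAP];	/* 'h' = hot, 'u' = used-once */
	int n;
};

/* Add a page at the head; return the tag evicted from the tail, or 0. */
static char lru_add(struct lru_list *l, char tag)
{
	char evicted = 0;

	if (l->n == LIST_CAP) {
		evicted = l->tag[l->n - 1];
		l->n--;
	}
	memmove(l->tag + 1, l->tag, l->n);
	l->tag[0] = tag;
	l->n++;
	return evicted;
}

static void simulate(int new_pages_start_active)
{
	struct lru_list active = { {0}, 0 }, inactive = { {0}, 0 };
	int lost_hot = 0, lost_once = 0;

	for (int i = 0; i < LIST_CAP; i++)	/* step 1: 50 hot pages */
		lru_add(&active, 'h');

	for (int i = 0; i < 2 * LIST_CAP; i++) {	/* steps 2 and 3 */
		char out;

		if (new_pages_start_active) {	/* old behavior */
			char demoted = lru_add(&active, 'u');
			out = demoted ? lru_add(&inactive, demoted) : 0;
		} else {			/* this patch */
			out = lru_add(&inactive, 'u');
		}
		if (out == 'h')
			lost_hot++;
		else if (out == 'u')
			lost_once++;
	}
	printf("%s: swapped out %d hot, %d used-once\n",
	       new_pages_start_active ? "old" : "new", lost_hot, lost_once);
}

int main(void)
{
	simulate(1);	/* old: swapped out 50 hot, 0 used-once */
	simulate(0);	/* new: swapped out 0 hot, 50 used-once */
	return 0;
}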
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/swap.h    |  2 +-
 kernel/events/uprobes.c |  2 +-
 mm/huge_memory.c        |  6 +++---
 mm/khugepaged.c         |  2 +-
 mm/memory.c             |  9 ++++-----
 mm/migrate.c            |  2 +-
 mm/swap.c               | 13 +++++++------
 mm/swapfile.c           |  2 +-
 mm/userfaultfd.c        |  2 +-
 mm/vmscan.c             | 21 +++++++++++++++++++--
 10 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 1e99f7a..954e13e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -344,7 +344,7 @@ extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
 extern void swap_setup(void);
 
-extern void lru_cache_add_active_or_unevictable(struct page *page,
+extern void lru_cache_add_inactive_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ece7e13..14156fc 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -190,7 +190,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		get_page(new_page);
 		page_add_new_anon_rmap(new_page, vma, addr, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		lru_cache_add_inactive_or_unevictable(new_page, vma);
 	} else
 		/* no new page, just dec_mm_counter for old_page */
 		dec_mm_counter(mm, MM_ANONPAGES);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a880932..6356dfd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -638,7 +638,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false, true);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1282,7 +1282,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
 		mem_cgroup_commit_charge(pages[i], memcg, false, false);
-		lru_cache_add_active_or_unevictable(pages[i], vma);
+		lru_cache_add_inactive_or_unevictable(pages[i], vma);
 		vmf->pte = pte_offset_map(&_pmd, haddr);
 		VM_BUG_ON(!pte_none(*vmf->pte));
 		set_pte_at(vma->vm_mm, haddr, vmf->pte, entry);
@@ -1435,7 +1435,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 	pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
 	page_add_new_anon_rmap(new_page, vma, haddr, true);
 	mem_cgroup_commit_charge(new_page, memcg, false, true);
-	lru_cache_add_active_or_unevictable(new_page, vma);
+	lru_cache_add_inactive_or_unevictable(new_page, vma);
 	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 	if (!page) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b679908..246c155 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1092,7 +1092,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	page_add_new_anon_rmap(new_page, vma, address, true);
 	mem_cgroup_commit_charge(new_page, memcg, false, true);
 	count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1);
-	lru_cache_add_active_or_unevictable(new_page, vma);
+	lru_cache_add_inactive_or_unevictable(new_page, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
diff --git a/mm/memory.c b/mm/memory.c
index 45442d9..5f7813a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2513,7 +2513,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
 		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		lru_cache_add_inactive_or_unevictable(new_page, vma);
 		/*
 		 * We call the notify macro here because, when using secondary
 		 * mmu page tables (such as kvm shadow page tables), we want the
@@ -3038,11 +3038,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(page != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	} else {
 		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
 		mem_cgroup_commit_charge(page, memcg, true, false);
-		activate_page(page);
 	}
 
 	swap_free(entry);
@@ -3186,7 +3185,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
-	lru_cache_add_active_or_unevictable(page, vma);
+	lru_cache_add_inactive_or_unevictable(page, vma);
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
@@ -3449,7 +3448,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
 		page_add_file_rmap(page, false);
diff --git a/mm/migrate.c b/mm/migrate.c
index 86873b6..ef034c0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2784,7 +2784,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	page_add_new_anon_rmap(page, vma, addr, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	if (!is_zone_device_page(page))
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	get_page(page);
 
 	if (flush) {
diff --git a/mm/swap.c b/mm/swap.c
index 5341ae9..18b2735 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -448,23 +448,24 @@ void lru_cache_add(struct page *page)
 }
 
 /**
- * lru_cache_add_active_or_unevictable
+ * lru_cache_add_inactive_or_unevictable
  * @page:  the page to be added to LRU
  * @vma:   vma in which page is mapped for determining reclaimability
  *
- * Place @page on the active or unevictable LRU list, depending on its
+ * Place @page on the inactive or unevictable LRU list, depending on its
  * evictability. Note that if the page is not evictable, it goes
  * directly back onto it's zone's unevictable list, it does NOT use a
  * per cpu pagevec.
  */
-void lru_cache_add_active_or_unevictable(struct page *page,
+void lru_cache_add_inactive_or_unevictable(struct page *page,
 					 struct vm_area_struct *vma)
 {
+	bool evictable;
+
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
-		SetPageActive(page);
-	else if (!TestSetPageMlocked(page)) {
+	evictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED;
+	if (!evictable && !TestSetPageMlocked(page)) {
 		/*
 		 * We use the irq-unsafe __mod_zone_page_stat because this
 		 * counter is not modified from interrupt context, and the pte
diff --git a/mm/swapfile.c b/mm/swapfile.c
index bb3261d..6bdcbf9 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1888,7 +1888,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
 	swap_free(entry);
 	/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 1b0d7ab..875e329 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -120,7 +120,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	inc_mm_counter(dst_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
-	lru_cache_add_active_or_unevictable(page, dst_vma);
+	lru_cache_add_inactive_or_unevictable(page, dst_vma);
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e772f3f..4122a84 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1010,8 +1010,15 @@ static enum page_references page_check_references(struct page *page,
 		return PAGEREF_RECLAIM;
 
 	if (referenced_ptes) {
-		if (PageSwapBacked(page))
-			return PAGEREF_ACTIVATE;
+		if (PageSwapBacked(page)) {
+			if (referenced_page) {
+				ClearPageReferenced(page);
+				return PAGEREF_ACTIVATE;
+			}
+
+			SetPageReferenced(page);
+			return PAGEREF_KEEP;
+		}
 		/*
 		 * All mapped pages start out with page table
 		 * references from the instantiating fault, so we need
@@ -2056,6 +2063,15 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			}
 		}
 
+		/*
+		 * Now, newly created anonymous page isn't appended to the
+		 * active list. We don't need to clear the reference bit here.
+		 */
+		if (PageSwapBacked(page)) {
+			ClearPageReferenced(page);
+			goto deactivate;
+		}
+
 		if (page_referenced(page, 0, sc->target_mem_cgroup,
 				    &vm_flags)) {
 			nr_rotated += hpage_nr_pages(page);
@@ -2074,6 +2090,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			}
 		}
 
+deactivate:
 		ClearPageActive(page);	/* we are de-activating */
 		SetPageWorkingset(page);
 		list_add(&page->lru, &l_inactive);
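The page_check_references() hunk above gives swap-backed pages a two-touch
promotion rule, similar in spirit to the file LRU: the first referenced scan
only marks the page, the second one activates it. A toy user-space version
(all names invented for illustration):

#include <stdbool.h>

enum toy_pageref { TOY_KEEP, TOY_ACTIVATE };

/* First touch: remember it; second touch: promote to the active list. */
enum toy_pageref anon_check_references(bool referenced_ptes,
				       bool *page_referenced_flag)
{
	if (!referenced_ptes)
		return TOY_KEEP;
	if (*page_referenced_flag) {
		*page_referenced_flag = false;	/* like ClearPageReferenced() */
		return TOY_ACTIVATE;
	}
	*page_referenced_flag = true;		/* like SetPageReferenced() */
	return TOY_KEEP;
}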
From patchwork Tue Feb 11 06:19:47 2020
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>, Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@techsingularity.net>, kernel-team@lge.com
Subject: [PATCH 3/9] mm/workingset: extend the workingset detection for anon LRU
Date: Tue, 11 Feb 2020 15:19:47 +0900
Message-Id: <1581401993-20041-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

In the following patch, workingset detection will be applied to the
anonymous LRU. To prepare for that, this patch adds the code needed to
distinguish and handle the two LRU types.
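The core of the preparation can be pictured with a toy structure (names
invented, not the kernel definitions): the single file-only
eviction/activation counter becomes a per-type pair, and each call site
passes is_file to select its slot.

#include <stdatomic.h>

enum { LRU_ANON = 0, LRU_FILE = 1 };

/* Toy lruvec: one inactive-age clock and one refault snapshot per type. */
struct toy_lruvec {
	atomic_long inactive_age[2];	/* was: a single file-only counter */
	unsigned long refaults[2];	/* was: a single file-only snapshot */
};

void toy_advance_inactive_age(struct toy_lruvec *lruvec, int is_file)
{
	atomic_fetch_add(&lruvec->inactive_age[is_file], 1);
}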
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/mmzone.h | 14 +++++++++-----
 mm/memcontrol.c        | 12 ++++++++----
 mm/vmscan.c            | 15 ++++++++++-----
 mm/vmstat.c            |  6 ++++--
 mm/workingset.c        | 35 ++++++++++++++++++++-------------
 5 files changed, 53 insertions(+), 29 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5334ad8..b78fd8c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -220,8 +220,12 @@ enum node_stat_item {
 	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
 	NR_ISOLATED_FILE,	/* Temporary isolated pages from file lru */
 	WORKINGSET_NODES,
-	WORKINGSET_REFAULT,
-	WORKINGSET_ACTIVATE,
+	WORKINGSET_REFAULT_BASE,
+	WORKINGSET_REFAULT_ANON = WORKINGSET_REFAULT_BASE,
+	WORKINGSET_REFAULT_FILE,
+	WORKINGSET_ACTIVATE_BASE,
+	WORKINGSET_ACTIVATE_ANON = WORKINGSET_ACTIVATE_BASE,
+	WORKINGSET_ACTIVATE_FILE,
 	WORKINGSET_RESTORE,
 	WORKINGSET_NODERECLAIM,
 	NR_ANON_MAPPED,	/* Mapped anonymous pages */
@@ -304,10 +308,10 @@ enum lruvec_flags {
 struct lruvec {
 	struct list_head		lists[NR_LRU_LISTS];
 	struct zone_reclaim_stat	reclaim_stat;
-	/* Evictions & activations on the inactive file list */
-	atomic_long_t			inactive_age;
+	/* Evictions & activations on the inactive list */
+	atomic_long_t			inactive_age[2];
 	/* Refaults at the time of last reclaim cycle */
-	unsigned long			refaults;
+	unsigned long			refaults[2];
 	/* Various lruvec state flags (enum lruvec_flags) */
 	unsigned long			flags;
 #ifdef CONFIG_MEMCG
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6c83cf4..8f4473d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1431,10 +1431,14 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGMAJFAULT),
 		       memcg_events(memcg, PGMAJFAULT));
 
-	seq_buf_printf(&s, "workingset_refault %lu\n",
-		       memcg_page_state(memcg, WORKINGSET_REFAULT));
-	seq_buf_printf(&s, "workingset_activate %lu\n",
-		       memcg_page_state(memcg, WORKINGSET_ACTIVATE));
+	seq_buf_printf(&s, "workingset_refault_anon %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_REFAULT_ANON));
+	seq_buf_printf(&s, "workingset_refault_file %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_REFAULT_FILE));
+	seq_buf_printf(&s, "workingset_activate_anon %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_ACTIVATE_ANON));
+	seq_buf_printf(&s, "workingset_activate_file %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_ACTIVATE_FILE));
 	seq_buf_printf(&s, "workingset_nodereclaim %lu\n",
 		       memcg_page_state(memcg, WORKINGSET_NODERECLAIM));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4122a84..74c3ade 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2735,7 +2735,10 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	if (!sc->force_deactivate) {
 		unsigned long refaults;
 
-		if (inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_ANON);
+		if (refaults != target_lruvec->refaults[0] ||
+			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
 			sc->may_deactivate |= DEACTIVATE_ANON;
 		else
 			sc->may_deactivate &= ~DEACTIVATE_ANON;
@@ -2746,8 +2749,8 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		 * rid of any stale active pages quickly.
 		 */
 		refaults = lruvec_page_state(target_lruvec,
-					     WORKINGSET_ACTIVATE);
-		if (refaults != target_lruvec->refaults ||
+					     WORKINGSET_ACTIVATE_FILE);
+		if (refaults != target_lruvec->refaults[1] ||
 		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
 			sc->may_deactivate |= DEACTIVATE_FILE;
 		else
@@ -3026,8 +3029,10 @@ static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
 	unsigned long refaults;
 
 	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
-	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE);
-	target_lruvec->refaults = refaults;
+	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON);
+	target_lruvec->refaults[0] = refaults;
+	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_FILE);
+	target_lruvec->refaults[1] = refaults;
 }
 
 /*
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 78d5337..3cdf8e9 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1146,8 +1146,10 @@ const char * const vmstat_text[] = {
 	"nr_isolated_anon",
 	"nr_isolated_file",
 	"workingset_nodes",
-	"workingset_refault",
-	"workingset_activate",
+	"workingset_refault_anon",
+	"workingset_refault_file",
+	"workingset_activate_anon",
+	"workingset_activate_file",
 	"workingset_restore",
 	"workingset_nodereclaim",
 	"nr_anon_pages",
diff --git a/mm/workingset.c b/mm/workingset.c
index 474186b..d04f70a 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Double CLOCK lists
@@ -156,7 +157,7 @@
  *
  *		Implementation
  *
- * For each node's file LRU lists, a counter for inactive evictions
+ * For each node's anon/file LRU lists, a counter for inactive evictions
  * and activations is maintained (node->inactive_age).
 *
 * On eviction, a snapshot of this counter (along with some bits to
@@ -213,7 +214,8 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
 	*workingsetp = workingset;
 }
 
-static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat)
+static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat,
+				int is_file)
 {
 	/*
 	 * Reclaiming a cgroup means reclaiming all its children in a
@@ -230,7 +232,7 @@ static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat)
 		struct lruvec *lruvec;
 
 		lruvec = mem_cgroup_lruvec(memcg, pgdat);
-		atomic_long_inc(&lruvec->inactive_age);
+		atomic_long_inc(&lruvec->inactive_age[is_file]);
 	} while (memcg && (memcg = parent_mem_cgroup(memcg)));
 }
 
@@ -248,18 +250,19 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	int memcgid;
+	int is_file = page_is_file_cache(page);
 
 	/* Page is fully exclusive and pins page->mem_cgroup */
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
-	advance_inactive_age(page_memcg(page), pgdat);
+	advance_inactive_age(page_memcg(page), pgdat, is_file);
 
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_id(lruvec_memcg(lruvec));
-	eviction = atomic_long_read(&lruvec->inactive_age);
+	eviction = atomic_long_read(&lruvec->inactive_age[is_file]);
 	return pack_shadow(memcgid, pgdat, eviction, PageWorkingset(page));
 }
 
@@ -278,13 +281,16 @@ void workingset_refault(struct page *page, void *shadow)
 	struct lruvec *eviction_lruvec;
 	unsigned long refault_distance;
 	struct pglist_data *pgdat;
-	unsigned long active_file;
+	unsigned long active;
 	struct mem_cgroup *memcg;
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	unsigned long refault;
 	bool workingset;
 	int memcgid;
+	int is_file = page_is_file_cache(page);
+	enum lru_list active_lru = page_lru_base_type(page) + LRU_ACTIVE_FILE;
+	enum node_stat_item workingset_stat;
 
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
 
@@ -309,8 +315,8 @@ void workingset_refault(struct page *page, void *shadow)
 	if (!mem_cgroup_disabled() && !eviction_memcg)
 		goto out;
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
-	refault = atomic_long_read(&eviction_lruvec->inactive_age);
-	active_file = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
+	refault = atomic_long_read(&eviction_lruvec->inactive_age[is_file]);
+	active = lruvec_page_state(eviction_lruvec, active_lru);
 
 	/*
 	 * Calculate the refault distance
@@ -341,19 +347,21 @@ void workingset_refault(struct page *page, void *shadow)
 	memcg = page_memcg(page);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
-	inc_lruvec_state(lruvec, WORKINGSET_REFAULT);
+	workingset_stat = WORKINGSET_REFAULT_BASE + is_file;
+	inc_lruvec_state(lruvec, workingset_stat);
 
 	/*
 	 * Compare the distance to the existing workingset size. We
 	 * don't act on pages that couldn't stay resident even if all
 	 * the memory was available to the page cache.
 	 */
-	if (refault_distance > active_file)
+	if (refault_distance > active)
 		goto out;
 
 	SetPageActive(page);
-	advance_inactive_age(memcg, pgdat);
-	inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE);
+	advance_inactive_age(memcg, pgdat, is_file);
+	workingset_stat = WORKINGSET_ACTIVATE_BASE + is_file;
+	inc_lruvec_state(lruvec, workingset_stat);
 
 	/* Page was active prior to eviction */
 	if (workingset) {
@@ -371,6 +379,7 @@ void workingset_refault(struct page *page, void *shadow)
 void workingset_activation(struct page *page)
 {
 	struct mem_cgroup *memcg;
+	int is_file = page_is_file_cache(page);
 
 	rcu_read_lock();
 	/*
@@ -383,7 +392,7 @@ void workingset_activation(struct page *page)
 	memcg = page_memcg_rcu(page);
 	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
-	advance_inactive_age(memcg, page_pgdat(page));
+	advance_inactive_age(memcg, page_pgdat(page), is_file);
 out:
 	rcu_read_unlock();
 }

From patchwork Tue Feb 11 06:19:48 2020
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>, Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@techsingularity.net>, kernel-team@lge.com
Subject: [PATCH 4/9] mm/swapcache: support to handle the value in swapcache
Date: Tue, 11 Feb 2020 15:19:48 +0900
Message-Id: <1581401993-20041-5-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

The swap cache currently does not handle value entries (shadow entries),
since nothing stores them there. In the following patch, workingset
detection for anonymous pages will be implemented, and it stores shadow
values in the swap cache. So the swap cache needs to handle them, and this
patch implements that handling.
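The "value" here is an XArray value entry: a small integer tagged so that it
can share a slot with page pointers. Roughly, as a user-space sketch of the
tagging idea (not the kernel's <linux/xarray.h> itself):

#include <stdint.h>
#include <stdbool.h>

/* A value entry sets the low bit, which a valid pointer never has. */
static inline void *mk_value(unsigned long v)
{
	return (void *)((v << 1) | 1);
}

static inline bool is_value(const void *entry)
{
	return ((uintptr_t)entry & 1) != 0;
}

With that, the add path can spot a leftover shadow (xa_is_value()) and fix
up nrexceptional, and the delete path can store a shadow instead of NULL, as
the diff below does.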
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/swap.h |  5 +++--
 mm/swap_state.c      | 23 ++++++++++++++++++++---
 mm/vmscan.c          |  2 +-
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 954e13e..0df8b3f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -410,7 +410,8 @@ extern void show_swap_cache_info(void);
 extern int add_to_swap(struct page *page);
 extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
 extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
-extern void __delete_from_swap_cache(struct page *, swp_entry_t entry);
+extern void __delete_from_swap_cache(struct page *page,
+			swp_entry_t entry, void *shadow);
 extern void delete_from_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
@@ -571,7 +572,7 @@ static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
 }
 
 static inline void __delete_from_swap_cache(struct page *page,
-						swp_entry_t entry)
+					swp_entry_t entry, void *shadow)
 {
 }
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8e7ce9a..3fbbe45 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -117,6 +117,10 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
 	unsigned long i, nr = compound_nr(page);
+	unsigned long nrexceptional = 0;
+	void *old;
+
+	xas_set_update(&xas, workingset_update_node);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
@@ -132,10 +136,14 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 			goto unlock;
 		for (i = 0; i < nr; i++) {
 			VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+			old = xas_load(&xas);
+			if (xa_is_value(old))
+				nrexceptional++;
 			set_page_private(page + i, entry.val + i);
 			xas_store(&xas, page);
 			xas_next(&xas);
 		}
+		address_space->nrexceptional -= nrexceptional;
 		address_space->nrpages += nr;
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		ADD_CACHE_INFO(add_total, nr);
@@ -155,24 +163,33 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
  * This must be called only on pages that have
  * been verified to be in the swap cache.
  */
-void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
+void __delete_from_swap_cache(struct page *page,
+			swp_entry_t entry, void *shadow)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	int i, nr = hpage_nr_pages(page);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE(xas, &address_space->i_pages, idx);
 
+	/* Do not apply workingset detection for the huge page */
+	if (nr > 1)
+		shadow = NULL;
+
+	xas_set_update(&xas, workingset_update_node);
+
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageSwapCache(page), page);
 	VM_BUG_ON_PAGE(PageWriteback(page), page);
 
 	for (i = 0; i < nr; i++) {
-		void *entry = xas_store(&xas, NULL);
+		void *entry = xas_store(&xas, shadow);
 		VM_BUG_ON_PAGE(entry != page, entry);
 		set_page_private(page + i, 0);
 		xas_next(&xas);
 	}
 	ClearPageSwapCache(page);
+	if (shadow)
+		address_space->nrexceptional += nr;
 	address_space->nrpages -= nr;
 	__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	ADD_CACHE_INFO(del_total, nr);
@@ -247,7 +264,7 @@ void delete_from_swap_cache(struct page *page)
 	struct address_space *address_space = swap_address_space(entry);
 
 	xa_lock_irq(&address_space->i_pages);
-	__delete_from_swap_cache(page, entry);
+	__delete_from_swap_cache(page, entry, NULL);
 	xa_unlock_irq(&address_space->i_pages);
 
 	put_swap_page(page, entry);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 74c3ade..99588ba 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -909,7 +909,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	if (PageSwapCache(page)) {
 		swp_entry_t swap = { .val = page_private(page) };
 		mem_cgroup_swapout(page, swap);
-		__delete_from_swap_cache(page, swap);
+		__delete_from_swap_cache(page, swap, NULL);
 		xa_unlock_irqrestore(&mapping->i_pages, flags);
 		put_swap_page(page, swap);
 	} else {

From patchwork Tue Feb 11 06:19:49 2020
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>, Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@techsingularity.net>, kernel-team@lge.com
Subject: [PATCH 5/9] mm/workingset: use the node counter if memcg is the root memcg
Date: Tue, 11 Feb 2020 15:19:49 +0900
Message-Id: <1581401993-20041-6-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

In the following patch, workingset detection is implemented for the swap
cache. A swap cache node is usually allocated by kswapd, and it is not
charged to a kmemcg since the allocation comes from a kernel thread. So the
swap cache's shadow nodes are managed on the node list of the list_lru
rather than on a memcg-specific one.

When shadow nodes are counted for the root memcg during slab reclaim, the
count does return the number of shadow nodes on the node list of the
list_lru, since the root memcg has the kmem cache id -1. However, the size
of the pages on the LRU is calculated for the specific memcg, so the two
sides mismatch. Because of this, the number of shadow nodes cannot grow to
the required size, and workingset detection cannot work correctly.

This patch fixes the bug by checking whether the memcg is the root memcg.
If it is, the system-wide (node) list is used instead of the memcg-specific
one to calculate the proper size for the shadow nodes, so their number can
grow as expected.
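The fix can be pictured with a toy version of the shrinker count (every name
below is invented for illustration; only the selection logic mirrors the
diff):

#include <stdbool.h>
#include <stdio.h>

struct toy_memcg { bool is_root; };

static unsigned long node_shadow_count = 1000;	/* node-wide list_lru */
static unsigned long memcg_shadow_count = 10;	/* per-memcg list_lru */

/*
 * Shadow nodes for the swap cache are allocated by kswapd and accounted
 * to the node-wide list, not to a specific memcg. So the root memcg must
 * use the node-wide counter; only a non-root memcg consults its own list.
 */
static unsigned long toy_count_shadow_nodes(struct toy_memcg *memcg)
{
	if (memcg && !memcg->is_root)
		return memcg_shadow_count;
	return node_shadow_count;
}

int main(void)
{
	struct toy_memcg root = { .is_root = true };

	printf("root sees %lu shadow nodes\n", toy_count_shadow_nodes(&root));
	return 0;
}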
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/workingset.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index d04f70a..636aafc 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -468,7 +468,13 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	 * PAGE_SIZE / xa_nodes / node_entries * 8 / PAGE_SIZE
 	 */
 #ifdef CONFIG_MEMCG
-	if (sc->memcg) {
+	/*
+	 * Kernel allocation on root memcg isn't regarded as allocation of
+	 * specific memcg. So, if sc->memcg is the root memcg, we need to
+	 * use the count for the node rather than one for the specific
+	 * memcg.
+	 */
+	if (sc->memcg && !mem_cgroup_is_root(sc->memcg)) {
 		struct lruvec *lruvec;
 		int i;

From patchwork Tue Feb 11 06:19:50 2020
From: js1304@gmail.com
Subject: [PATCH 6/9] mm/workingset: handle the page without memcg
Date: Tue, 11 Feb 2020 15:19:50 +0900
Message-Id: <1581401993-20041-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

While implementing workingset detection for anonymous pages, I found
some swap cache pages with a NULL memcg. From reading the code, I found
two causes. One is swap-in readahead; the other is a corner case
related to the shmem cache. Both should eventually be fixed, but fixing
them is not straightforward. For example, at swap-off time all
swapped-out pages are read back into the swap cache; in that case, who
is the owner of the swap cache page? Since the problem does not look
trivial, I decided to leave it for now and handle the corner case at
the place where the error occurs.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/workingset.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/workingset.c b/mm/workingset.c
index 636aafc..20286d6 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -257,6 +257,10 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
	VM_BUG_ON_PAGE(page_count(page), page);
	VM_BUG_ON_PAGE(!PageLocked(page), page);

+	/* page_memcg() can be NULL if swap-in readahead happens */
+	if (!page_memcg(page))
+		return NULL;
+
	advance_inactive_age(page_memcg(page), pgdat, is_file);

	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
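The shape of the fix is an early return when the owner is unknown:
pages brought in by swap readahead have no memcg yet, so there is no
eviction history worth recording for them. A tiny user-space analogue
of the guard (the struct names here are invented for illustration and
are not kernel types):

#include <stdio.h>

struct owner { long inactive_age; };
struct page  { struct owner *owner; };

/* record eviction history only when the page has a known owner */
static void *eviction_shadow(struct page *page)
{
	if (!page->owner)	/* e.g. swap-in readahead: not charged yet */
		return NULL;	/* no shadow entry for this page */

	return &page->owner->inactive_age;
}

int main(void)
{
	struct page orphan = { NULL };

	printf("shadow for unowned page: %p\n", eviction_shadow(&orphan));
	return 0;
}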
From patchwork Tue Feb 11 06:19:51 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11374873
From: js1304@gmail.com
Subject: [PATCH 7/9] mm/swap: implement workingset detection for anonymous LRU
Date: Tue, 11 Feb 2020 15:19:51 +0900
Message-Id: <1581401993-20041-8-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

This patch implements workingset detection for the anonymous LRU. All
of the infrastructure was put in place by the previous patches, so this
patch just activates workingset detection by installing and retrieving
the shadow entry.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/swap.h | 11 +++++++++--
 mm/memory.c          |  7 ++++++-
 mm/shmem.c           |  3 ++-
 mm/swap_state.c      | 31 ++++++++++++++++++++++++++-----
 mm/vmscan.c          |  7 +++++--
 5 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0df8b3f..fb4772e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -408,7 +408,9 @@ extern struct address_space *swapper_spaces[];
 extern unsigned long total_swapcache_pages(void);
 extern void show_swap_cache_info(void);
 extern int add_to_swap(struct page *page);
-extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
+extern void *get_shadow_from_swap_cache(swp_entry_t entry);
+extern int add_to_swap_cache(struct page *page, swp_entry_t entry,
+			gfp_t gfp, void **shadowp);
 extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
 extern void __delete_from_swap_cache(struct page *page,
			swp_entry_t entry, void *shadow);
@@ -565,8 +567,13 @@ static inline int add_to_swap(struct page *page)
	return 0;
 }

+static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
+{
+	return NULL;
+}
+
 static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
-					gfp_t gfp_mask)
+					gfp_t gfp_mask, void **shadowp)
 {
	return -1;
 }
diff --git a/mm/memory.c b/mm/memory.c
index 5f7813a..91a2097 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2925,10 +2925,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
			page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
							vmf->address);
			if (page) {
+				void *shadow;
+
				__SetPageLocked(page);
				__SetPageSwapBacked(page);
				set_page_private(page, entry.val);
-				lru_cache_add_anon(page);
+				shadow = get_shadow_from_swap_cache(entry);
+				if (shadow)
+					workingset_refault(page, shadow);
+				lru_cache_add(page);
				swap_readpage(page, true);
			}
		} else {
diff --git a/mm/shmem.c b/mm/shmem.c
index 8793e8c..c6663ad 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1370,7 +1370,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
		list_add(&info->swaplist, &shmem_swaplist);

	if (add_to_swap_cache(page, swap,
-			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN) == 0) {
+			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
+			NULL) == 0) {
		spin_lock_irq(&info->lock);
		shmem_recalc_inode(inode);
		info->swapped++;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3fbbe45..7f7cb19 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -107,11 +107,24 @@ void show_swap_cache_info(void)
	printk("Total swap = %lukB\n", total_swap_pages << (PAGE_SHIFT - 10));
 }

+void *get_shadow_from_swap_cache(swp_entry_t entry)
+{
+	struct address_space *address_space = swap_address_space(entry);
+	pgoff_t idx = swp_offset(entry);
+	struct page *page;
+
+	page = find_get_entry(address_space, idx);
+	if (xa_is_value(page))
+		return page;
+	return NULL;
+}
+
 /*
  * add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
  * but sets SwapCache flag and private instead of mapping and index.
  */
-int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
+int add_to_swap_cache(struct page *page, swp_entry_t entry,
+			gfp_t gfp, void **shadowp)
 {
	struct address_space *address_space = swap_address_space(entry);
	pgoff_t idx = swp_offset(entry);
@@ -137,8 +150,11 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
	for (i = 0; i < nr; i++) {
		VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
		old = xas_load(&xas);
-		if (xa_is_value(old))
+		if (xa_is_value(old)) {
			nrexceptional++;
+			if (shadowp)
+				*shadowp = old;
+		}
		set_page_private(page + i, entry.val + i);
		xas_store(&xas, page);
		xas_next(&xas);
@@ -226,7 +242,7 @@ int add_to_swap(struct page *page)
	 * Add it to the swap cache.
	 */
	err = add_to_swap_cache(page, entry,
-			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN);
+			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
	if (err)
		/*
		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely
@@ -380,6 +396,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
	struct page *found_page = NULL, *new_page = NULL;
	struct swap_info_struct *si;
	int err;
+	void *shadow;

	*new_page_allocated = false;

	do {
@@ -435,11 +452,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
		/* May fail (-ENOMEM) if XArray node allocation failed. */
		__SetPageLocked(new_page);
		__SetPageSwapBacked(new_page);
-		err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
+		shadow = NULL;
+		err = add_to_swap_cache(new_page, entry,
+					gfp_mask & GFP_KERNEL, &shadow);
		if (likely(!err)) {
			/* Initiate read into locked page */
			SetPageWorkingset(new_page);
-			lru_cache_add_anon(new_page);
+			if (shadow)
+				workingset_refault(new_page, shadow);
+			lru_cache_add(new_page);
			*new_page_allocated = true;
			return new_page;
		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 99588ba..a1892e7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -867,6 +867,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 {
	unsigned long flags;
	int refcount;
+	void *shadow = NULL;

	BUG_ON(!PageLocked(page));
	BUG_ON(mapping != page_mapping(page));
@@ -909,12 +910,13 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
	if (PageSwapCache(page)) {
		swp_entry_t swap = { .val = page_private(page) };
		mem_cgroup_swapout(page, swap);
-		__delete_from_swap_cache(page, swap, NULL);
+		if (reclaimed && !mapping_exiting(mapping))
+			shadow = workingset_eviction(page, target_memcg);
+		__delete_from_swap_cache(page, swap, shadow);
		xa_unlock_irqrestore(&mapping->i_pages, flags);
		put_swap_page(page, swap);
	} else {
		void (*freepage)(struct page *);
-		void *shadow = NULL;

		freepage = mapping->a_ops->freepage;
		/*
@@ -1485,6 +1487,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
				SetPageActive(page);
				stat->nr_activate[type] += nr_pages;
				count_memcg_page_event(page, PGACTIVATE);
+				workingset_activation(page);
			}
 keep_locked:
			unlock_page(page);
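The mechanics above hinge on a swap cache slot holding either a page
pointer or a shadow "value" entry, distinguished by the low bit, which
is what xa_is_value() tests. A self-contained sketch of that convention
(MK_VALUE/IS_VALUE/TO_VALUE are stand-ins mirroring the semantics of
xa_mk_value()/xa_is_value()/xa_to_value(), and the single slot is a
simplification of the XArray):

#include <stdio.h>
#include <stdint.h>

#define MK_VALUE(v)	((void *)(((uintptr_t)(v) << 1) | 1))
#define IS_VALUE(p)	((uintptr_t)(p) & 1)
#define TO_VALUE(p)	((uintptr_t)(p) >> 1)

static void *slot;	/* one swap cache slot, for illustration */

static void evict_page(uintptr_t eviction_age)
{
	slot = MK_VALUE(eviction_age);	/* leave a shadow entry behind */
}

static void swap_in(void *page)
{
	if (IS_VALUE(slot))	/* shadow found: this swap-in is a refault */
		printf("refault of page evicted at age %lu\n",
		       (unsigned long)TO_VALUE(slot));
	slot = page;		/* replace the shadow with the real page */
}

int main(void)
{
	int page;

	evict_page(42);
	swap_in(&page);
	return 0;
}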
From patchwork Tue Feb 11 06:19:52 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11374875
From: js1304@gmail.com
Subject: [PATCH 8/9] mm/vmscan: restore active/inactive ratio for anonymous LRU
Date: Tue, 11 Feb 2020 15:19:52 +0900
Message-Id: <1581401993-20041-9-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Now that workingset detection is implemented for the anonymous LRU, we
no longer need to worry about mis-detecting the workingset because of
the active/inactive ratio. Let's restore the ratio.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a1892e7..81ff725 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2237,7 +2237,7 @@ static bool inactive_is_low(struct lruvec *lruvec, enum lru_list inactive_lru)
	active = lruvec_page_state(lruvec, NR_LRU_BASE + active_lru);

	gb = (inactive + active) >> (30 - PAGE_SHIFT);
-	if (gb && is_file_lru(inactive_lru))
+	if (gb)
		inactive_ratio = int_sqrt(10 * gb);
	else
		inactive_ratio = 1;
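For reference, the restored ratio only kicks in on LRUs larger than
1GB: inactive_is_low() reports the inactive list as low when
inactive * inactive_ratio < active, with inactive_ratio =
int_sqrt(10 * gb). A quick standalone sketch of the resulting values
(using floating-point sqrt in place of the kernel's int_sqrt):

#include <stdio.h>
#include <math.h>

int main(void)
{
	unsigned long sizes_gb[] = { 0, 1, 10, 100, 1000 };

	for (int i = 0; i < 5; i++) {
		unsigned long gb = sizes_gb[i];
		/* inactive is low when inactive * ratio < active */
		unsigned long ratio = gb ? (unsigned long)sqrt(10.0 * gb) : 1;

		printf("%4lu GB LRU -> inactive_ratio %lu\n", gb, ratio);
	}
	return 0;	/* compile with -lm */
}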
From patchwork Tue Feb 11 06:19:53 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11374877
From: js1304@gmail.com
Subject: [PATCH 9/9] mm/swap: count a new anonymous page as a reclaim_state's rotate
Date: Tue, 11 Feb 2020 15:19:53 +0900
Message-Id: <1581401993-20041-10-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

reclaim_stat's rotate is used to control the ratio of scanning between
the file and anonymous LRUs. Before this patch series, all new
anonymous pages were counted toward rotate, which protected anonymous
pages on the active LRU and made reclaim on the anonymous LRU happen
less often than on the file LRU.

Now the situation has changed: new anonymous pages are no longer added
to the active LRU, so rotate would be far lower than before. Reclaim on
the anonymous LRU would then happen more often, which would hurt
systems tuned for the previous behavior. Therefore, this patch counts a
new anonymous page toward reclaim_stat's rotate. Although adding this
count to rotate does not fit the logic of the current algorithm,
reducing the regression matters more. I found this regression in a
kernel-build test, where it amounts to roughly 2~5% performance
degradation. With this workaround, performance is completely restored.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/swap.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/mm/swap.c b/mm/swap.c
index 18b2735..c3584af 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -187,6 +187,9 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(get_kernel_page);

+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
+				 void *arg);
+
 static void pagevec_lru_move_fn(struct pagevec *pvec,
	void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
	void *arg)
@@ -207,6 +210,19 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
			spin_lock_irqsave(&pgdat->lru_lock, flags);
		}

+		if (move_fn == __pagevec_lru_add_fn) {
+			struct list_head *entry = &page->lru;
+			unsigned long next = (unsigned long)entry->next;
+			unsigned long rotate = next & 2;
+
+			if (rotate) {
+				VM_BUG_ON(arg);
+
+				next = next & ~2;
+				entry->next = (struct list_head *)next;
+				arg = (void *)rotate;
+			}
+		}
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		(*move_fn)(page, lruvec, arg);
	}
@@ -475,6 +491,14 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
				    hpage_nr_pages(page));
		count_vm_event(UNEVICTABLE_PGMLOCKED);
	}
+
+	if (PageSwapBacked(page) && evictable) {
+		struct list_head *entry = &page->lru;
+		unsigned long next = (unsigned long)entry->next;
+
+		next = next | 2;
+		entry->next = (struct list_head *)next;
+	}
	lru_cache_add(page);
 }
@@ -927,6 +951,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
	enum lru_list lru;
	int was_unevictable = TestClearPageUnevictable(page);
+	unsigned long rotate = (unsigned long)arg;

	VM_BUG_ON_PAGE(PageLRU(page), page);
@@ -962,7 +987,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
	if (page_evictable(page)) {
		lru = page_lru(page);
		update_page_reclaim_stat(lruvec, page_is_file_cache(page),
-					PageActive(page));
+					PageActive(page) | rotate);
		if (was_unevictable)
			count_vm_event(UNEVICTABLE_PGRESCUED);
	} else {
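The rotate flag travels from lru_cache_add_inactive_or_unevictable() to
__pagevec_lru_add_fn() by borrowing bit 1 of page->lru.next, which is
otherwise unused while the page waits on a pagevec (list_head pointers
are at least 4-byte aligned, so the two low bits are free). A
standalone sketch of this tag-and-clear trick (the function names here
are illustrative, not the patch's):

#include <stdio.h>
#include <stdint.h>

struct list_head { struct list_head *next, *prev; };

/* set the flag where lru_cache_add_inactive_or_unevictable() would */
static void tag_rotate(struct list_head *entry)
{
	entry->next = (struct list_head *)((uintptr_t)entry->next | 2);
}

/* read and clear the flag where pagevec_lru_move_fn() would */
static int test_and_clear_rotate(struct list_head *entry)
{
	uintptr_t next = (uintptr_t)entry->next;

	entry->next = (struct list_head *)(next & ~(uintptr_t)2);
	return (next & 2) != 0;
}

int main(void)
{
	struct list_head e = { &e, &e };

	tag_rotate(&e);
	printf("first drain sees rotate=%d\n", test_and_clear_rotate(&e));
	printf("second drain sees rotate=%d\n", test_and_clear_rotate(&e));
	return 0;
}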