From patchwork Mon Mar 23 05:52:05 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 1/8] mm/vmscan: make active/inactive ratio as 1:1 for anon lru
Date: Mon, 23 Mar 2020 14:52:05 +0900
Message-Id: <1584942732-2184-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

The current implementation of LRU management for anonymous pages has
some problems. The most important one is that it doesn't protect the
workingset, that is, the pages on the active LRU list. Although this
problem will be fixed by the following patches, some preparation is
required first, and this patch provides it.

What the following patches do is restore workingset protection: newly
created or swapped-in pages start their lifetime on the inactive list.
If the inactive list is too small, a page has too little chance of
being referenced before reclaim, so it can never become part of the
workingset. To give newly added anonymous pages enough of a chance,
this patch changes the active/inactive LRU ratio for anon to 1:1.
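[ Illustration, not part of the patch: the ratio decision this change
  affects can be modeled by a small userspace C program. PAGE_SHIFT and
  the sample sizes are assumptions made for this sketch; only the
  int_sqrt(10 * gb) heuristic and the new is_file_lru() condition come
  from the patch. Build with: cc ratio.c -lm ]

#include <math.h>
#include <stdio.h>

#define PAGE_SHIFT 12   /* assumed 4KB pages */

/* Userspace model of inactive_is_low()'s ratio choice after this patch. */
static unsigned long inactive_ratio(unsigned long inactive,
                                    unsigned long active, int is_file)
{
        unsigned long gb = (inactive + active) >> (30 - PAGE_SHIFT);

        /* The file LRU keeps the sqrt heuristic; anon is now always 1:1. */
        if (gb && is_file)
                return (unsigned long)sqrt(10.0 * gb);
        return 1;
}

int main(void)
{
        unsigned long pages = 4UL << (30 - PAGE_SHIFT); /* 4GB of pages */

        /* prints "anon: 1, file: 6": anon lists stay balanced at 1:1 */
        printf("anon: %lu, file: %lu\n",
               inactive_ratio(pages / 2, pages / 2, 0),
               inactive_ratio(pages / 2, pages / 2, 1));
        return 0;
}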
Acked-by: Johannes Weiner
Signed-off-by: Joonsoo Kim
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 572fb17..e772f3f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2217,7 +2217,7 @@ static bool inactive_is_low(struct lruvec *lruvec, enum lru_list inactive_lru)
 	active = lruvec_page_state(lruvec, NR_LRU_BASE + active_lru);
 
 	gb = (inactive + active) >> (30 - PAGE_SHIFT);
-	if (gb)
+	if (gb && is_file_lru(inactive_lru))
 		inactive_ratio = int_sqrt(10 * gb);
 	else
 		inactive_ratio = 1;

From patchwork Mon Mar 23 05:52:06 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 2/8] mm/vmscan: protect the workingset on anonymous LRU
Date: Mon, 23 Mar 2020 14:52:06 +0900
Message-Id: <1584942732-2184-3-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

In the current implementation, a newly created or swapped-in anonymous
page starts its life on the active list. A growing active list triggers
rebalancing of the active/inactive lists, so old pages on the active
list are demoted to the inactive list. Hence, pages on the active list
aren't protected at all.

The following example illustrates the situation. Assume 50 hot pages on
the active list; the numbers denote the pages on the active/inactive
lists (active | inactive).

1. 50 hot pages on active list
50(h) | 0

2. workload: 50 newly created (used-once) pages
50(uo) | 50(h)

3. workload: another 50 newly created (used-once) pages
50(uo) | 50(uo), swap-out 50(h)

This patch fixes the issue. As with the file LRU, newly created or
swapped-in anonymous pages are inserted on the inactive list, and they
are promoted to the active list when enough references happen. This
simple modification changes the example above as follows (a toy
simulation of the same example is sketched below).

1. 50 hot pages on active list
50(h) | 0

2. workload: 50 newly created (used-once) pages
50(h) | 50(uo)

3. workload: another 50 newly created (used-once) pages
50(h) | 50(uo), swap-out 50(uo)

As you can see, the hot pages on the active list are now protected.

Note that this implementation has a drawback: a page cannot be promoted
and will be swapped out if its re-access interval is greater than the
size of the inactive list but less than the size of the total list
(active + inactive). A following patch solves this potential issue by
applying to the anonymous LRU the workingset detection that was
introduced for the file LRU some time ago.
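[ Illustration, not part of the patch: a toy userspace simulation of the
  example above. The fixed list size, the demotion rule, and every name
  here are simplifying assumptions; only the placement policy (new pages
  starting on the active vs. the inactive list) reflects the patch. ]

#include <stdio.h>
#include <string.h>

#define CAP 50  /* capacity of each LRU list */

struct lru {
        int act[CAP], n_act;
        int ina[CAP], n_ina;
};

/* Push id at the head of list[]; return the evicted tail, or -1. */
static int push(int *list, int *n, int id)
{
        int tail = -1;

        if (*n == CAP)
                tail = list[--(*n)];
        memmove(list + 1, list, *n * sizeof(int));
        list[0] = id;
        (*n)++;
        return tail;
}

/* Add a new page to the active list (old behavior) or inactive (new). */
static void add_page(struct lru *l, int id, int to_active)
{
        int demoted;

        if (to_active) {
                demoted = push(l->act, &l->n_act, id);
                if (demoted >= 0)       /* active overflow: demote */
                        push(l->ina, &l->n_ina, demoted); /* may swap out tail */
        } else {
                push(l->ina, &l->n_ina, id);    /* tail eviction = swap-out */
        }
}

static int hot_resident(struct lru *l)  /* hot pages have ids 0..49 */
{
        int i, cnt = 0;

        for (i = 0; i < l->n_act; i++)
                cnt += l->act[i] < 50;
        for (i = 0; i < l->n_ina; i++)
                cnt += l->ina[i] < 50;
        return cnt;
}

int main(void)
{
        struct lru old_lru = { .n_act = 0, .n_ina = 0 };
        struct lru new_lru = old_lru;
        int id;

        for (id = 0; id < 50; id++) {   /* 50 hot pages, already active */
                push(old_lru.act, &old_lru.n_act, id);
                push(new_lru.act, &new_lru.n_act, id);
        }
        for (id = 50; id < 150; id++) { /* 100 used-once pages */
                add_page(&old_lru, id, 1);      /* old: start active */
                add_page(&new_lru, id, 0);      /* new: start inactive */
        }
        /* prints "hot pages resident: old policy 0/50, new policy 50/50" */
        printf("hot pages resident: old policy %d/50, new policy %d/50\n",
               hot_resident(&old_lru), hot_resident(&new_lru));
        return 0;
}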
Acked-by: Johannes Weiner
Signed-off-by: Joonsoo Kim
---
 include/linux/swap.h    |  2 +-
 kernel/events/uprobes.c |  2 +-
 mm/huge_memory.c        |  6 +++---
 mm/khugepaged.c         |  2 +-
 mm/memory.c             |  9 ++++-----
 mm/migrate.c            |  2 +-
 mm/swap.c               | 13 +++++++------
 mm/swapfile.c           |  2 +-
 mm/userfaultfd.c        |  2 +-
 mm/vmscan.c             |  4 +---
 10 files changed, 21 insertions(+), 23 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 1e99f7a..954e13e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -344,7 +344,7 @@ extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
 
 extern void swap_setup(void);
-extern void lru_cache_add_active_or_unevictable(struct page *page,
+extern void lru_cache_add_inactive_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ece7e13..14156fc 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -190,7 +190,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		get_page(new_page);
 		page_add_new_anon_rmap(new_page, vma, addr, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		lru_cache_add_inactive_or_unevictable(new_page, vma);
 	} else
 		/* no new page, just dec_mm_counter for old_page */
 		dec_mm_counter(mm, MM_ANONPAGES);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a880932..6356dfd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -638,7 +638,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false, true);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1282,7 +1282,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
 		mem_cgroup_commit_charge(pages[i], memcg, false, false);
-		lru_cache_add_active_or_unevictable(pages[i], vma);
+		lru_cache_add_inactive_or_unevictable(pages[i], vma);
 		vmf->pte = pte_offset_map(&_pmd, haddr);
 		VM_BUG_ON(!pte_none(*vmf->pte));
 		set_pte_at(vma->vm_mm, haddr, vmf->pte, entry);
@@ -1435,7 +1435,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
 		page_add_new_anon_rmap(new_page, vma, haddr, true);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		lru_cache_add_inactive_or_unevictable(new_page, vma);
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		if (!page) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b679908..246c155 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1092,7 +1092,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	page_add_new_anon_rmap(new_page, vma, address, true);
 	mem_cgroup_commit_charge(new_page, memcg, false, true);
 	count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1);
-	lru_cache_add_active_or_unevictable(new_page, vma);
+	lru_cache_add_inactive_or_unevictable(new_page, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
diff --git a/mm/memory.c b/mm/memory.c
index 45442d9..5f7813a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2513,7 +2513,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
 		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(new_page, vma);
+		lru_cache_add_inactive_or_unevictable(new_page, vma);
 		/*
 		 * We call the notify macro here because, when using secondary
 		 * mmu page tables (such as kvm shadow page tables), we want the
@@ -3038,11 +3038,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(page != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	} else {
 		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
 		mem_cgroup_commit_charge(page, memcg, true, false);
-		activate_page(page);
 	}
 
 	swap_free(entry);
@@ -3186,7 +3185,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
-	lru_cache_add_active_or_unevictable(page, vma);
+	lru_cache_add_inactive_or_unevictable(page, vma);
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
@@ -3449,7 +3448,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
 		page_add_file_rmap(page, false);
diff --git a/mm/migrate.c b/mm/migrate.c
index 86873b6..ef034c0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2784,7 +2784,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	page_add_new_anon_rmap(page, vma, addr, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	if (!is_zone_device_page(page))
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	get_page(page);
 
 	if (flush) {
diff --git a/mm/swap.c b/mm/swap.c
index 5341ae9..442d27e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -448,23 +448,24 @@ void lru_cache_add(struct page *page)
 }
 
 /**
- * lru_cache_add_active_or_unevictable
+ * lru_cache_add_inactive_or_unevictable
 * @page:  the page to be added to LRU
 * @vma:   vma in which page is mapped for determining reclaimability
 *
- * Place @page on the active or unevictable LRU list, depending on its
+ * Place @page on the inactive or unevictable LRU list, depending on its
 * evictability.  Note that if the page is not evictable, it goes
 * directly back onto it's zone's unevictable list, it does NOT use a
 * per cpu pagevec.
 */
-void lru_cache_add_active_or_unevictable(struct page *page,
+void lru_cache_add_inactive_or_unevictable(struct page *page,
 					 struct vm_area_struct *vma)
 {
+	bool unevictable;
+
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
-		SetPageActive(page);
-	else if (!TestSetPageMlocked(page)) {
+	unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
+	if (unevictable && !TestSetPageMlocked(page)) {
 		/*
 		 * We use the irq-unsafe __mod_zone_page_stat because this
 		 * counter is not modified from interrupt context, and the pte
diff --git a/mm/swapfile.c b/mm/swapfile.c
index bb3261d..6bdcbf9 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1888,7 +1888,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
 		mem_cgroup_commit_charge(page, memcg, false, false);
-		lru_cache_add_active_or_unevictable(page, vma);
+		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
 	swap_free(entry);
 	/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 1b0d7ab..875e329 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -120,7 +120,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	inc_mm_counter(dst_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
-	lru_cache_add_active_or_unevictable(page, dst_vma);
+	lru_cache_add_inactive_or_unevictable(page, dst_vma);
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e772f3f..c932141 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1010,8 +1010,6 @@ static enum page_references page_check_references(struct page *page,
 		return PAGEREF_RECLAIM;
 
 	if (referenced_ptes) {
-		if (PageSwapBacked(page))
-			return PAGEREF_ACTIVATE;
 		/*
 		 * All mapped pages start out with page table
 		 * references from the instantiating fault, so we need
@@ -1034,7 +1032,7 @@ static enum page_references page_check_references(struct page *page,
 	/*
 	 * Activate file-backed executable pages after first usage.
 	 */
-	if (vm_flags & VM_EXEC)
+	if ((vm_flags & VM_EXEC) && !PageSwapBacked(page))
 		return PAGEREF_ACTIVATE;
 
 	return PAGEREF_KEEP;

From patchwork Mon Mar 23 05:52:07 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 3/8] mm/workingset: extend the workingset detection for anon LRU
Date: Mon, 23 Mar 2020 14:52:07 +0900
Message-Id: <1584942732-2184-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

A following patch will apply workingset detection to the anonymous LRU.
To prepare for that, this patch adds the code needed to distinguish and
handle the two LRU types; the indexing scheme it introduces is sketched
in the note below.
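[ Illustration, not part of the patch: the _BASE enum trick this patch
  introduces, reduced to a standalone C model. The stats array and the
  inc_stat() helper are invented for the sketch; the enum layout and the
  "BASE + file" arithmetic (file is 0 for anon, 1 for file) come
  directly from the diff below. ]

#include <stdbool.h>
#include <stdio.h>

enum stat_item {
        WORKINGSET_REFAULT_BASE,
        WORKINGSET_REFAULT_ANON = WORKINGSET_REFAULT_BASE,
        WORKINGSET_REFAULT_FILE,
        WORKINGSET_ACTIVATE_BASE,
        WORKINGSET_ACTIVATE_ANON = WORKINGSET_ACTIVATE_BASE,
        WORKINGSET_ACTIVATE_FILE,
        NR_STATS,
};

static unsigned long stats[NR_STATS];

static void inc_stat(enum stat_item base, bool file)
{
        stats[base + file]++;   /* anon -> _ANON slot, file -> _FILE slot */
}

int main(void)
{
        inc_stat(WORKINGSET_REFAULT_BASE, false);       /* anon refault */
        inc_stat(WORKINGSET_ACTIVATE_BASE, true);       /* file activation */
        printf("refault anon=%lu file=%lu, activate anon=%lu file=%lu\n",
               stats[WORKINGSET_REFAULT_ANON], stats[WORKINGSET_REFAULT_FILE],
               stats[WORKINGSET_ACTIVATE_ANON], stats[WORKINGSET_ACTIVATE_FILE]);
        return 0;
}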
Acked-by: Johannes Weiner
Signed-off-by: Joonsoo Kim
---
 include/linux/mmzone.h | 14 +++++++++-----
 mm/memcontrol.c        | 12 ++++++++----
 mm/vmscan.c            | 15 ++++++++++-----
 mm/vmstat.c            |  6 ++++--
 mm/workingset.c        | 33 ++++++++++++++++++++------------
 5 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5334ad8..ad0639f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -220,8 +220,12 @@ enum node_stat_item {
 	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
 	NR_ISOLATED_FILE,	/* Temporary isolated pages from file lru */
 	WORKINGSET_NODES,
-	WORKINGSET_REFAULT,
-	WORKINGSET_ACTIVATE,
+	WORKINGSET_REFAULT_BASE,
+	WORKINGSET_REFAULT_ANON = WORKINGSET_REFAULT_BASE,
+	WORKINGSET_REFAULT_FILE,
+	WORKINGSET_ACTIVATE_BASE,
+	WORKINGSET_ACTIVATE_ANON = WORKINGSET_ACTIVATE_BASE,
+	WORKINGSET_ACTIVATE_FILE,
 	WORKINGSET_RESTORE,
 	WORKINGSET_NODERECLAIM,
 	NR_ANON_MAPPED,	/* Mapped anonymous pages */
@@ -304,10 +308,10 @@ enum lruvec_flags {
 struct lruvec {
 	struct list_head		lists[NR_LRU_LISTS];
 	struct zone_reclaim_stat	reclaim_stat;
-	/* Evictions & activations on the inactive file list */
-	atomic_long_t			inactive_age;
+	/* Evictions & activations on the inactive list, anon=0, file=1 */
+	atomic_long_t			inactive_age[2];
 	/* Refaults at the time of last reclaim cycle */
-	unsigned long			refaults;
+	unsigned long			refaults[2];
 	/* Various lruvec state flags (enum lruvec_flags) */
 	unsigned long			flags;
 #ifdef CONFIG_MEMCG
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6c83cf4..8f4473d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1431,10 +1431,14 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGMAJFAULT),
 		       memcg_events(memcg, PGMAJFAULT));
 
-	seq_buf_printf(&s, "workingset_refault %lu\n",
-		       memcg_page_state(memcg, WORKINGSET_REFAULT));
-	seq_buf_printf(&s, "workingset_activate %lu\n",
-		       memcg_page_state(memcg, WORKINGSET_ACTIVATE));
+	seq_buf_printf(&s, "workingset_refault_anon %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_REFAULT_ANON));
+	seq_buf_printf(&s, "workingset_refault_file %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_REFAULT_FILE));
+	seq_buf_printf(&s, "workingset_activate_anon %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_ACTIVATE_ANON));
+	seq_buf_printf(&s, "workingset_activate_file %lu\n",
+		       memcg_page_state(memcg, WORKINGSET_ACTIVATE_FILE));
 	seq_buf_printf(&s, "workingset_nodereclaim %lu\n",
 		       memcg_page_state(memcg, WORKINGSET_NODERECLAIM));
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c932141..0493c25 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2716,7 +2716,10 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	if (!sc->force_deactivate) {
 		unsigned long refaults;
 
-		if (inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_ANON);
+		if (refaults != target_lruvec->refaults[0] ||
+			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
 			sc->may_deactivate |= DEACTIVATE_ANON;
 		else
 			sc->may_deactivate &= ~DEACTIVATE_ANON;
@@ -2727,8 +2730,8 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		 * rid of any stale active pages quickly.
 		 */
 		refaults = lruvec_page_state(target_lruvec,
-					     WORKINGSET_ACTIVATE);
-		if (refaults != target_lruvec->refaults ||
+					     WORKINGSET_ACTIVATE_FILE);
+		if (refaults != target_lruvec->refaults[1] ||
 		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
 			sc->may_deactivate |= DEACTIVATE_FILE;
 		else
@@ -3007,8 +3010,10 @@ static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
 	unsigned long refaults;
 
 	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
-	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE);
-	target_lruvec->refaults = refaults;
+	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON);
+	target_lruvec->refaults[0] = refaults;
+	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_FILE);
+	target_lruvec->refaults[1] = refaults;
 }
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 78d5337..3cdf8e9 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1146,8 +1146,10 @@ const char * const vmstat_text[] = {
 	"nr_isolated_anon",
 	"nr_isolated_file",
 	"workingset_nodes",
-	"workingset_refault",
-	"workingset_activate",
+	"workingset_refault_anon",
+	"workingset_refault_file",
+	"workingset_activate_anon",
+	"workingset_activate_file",
 	"workingset_restore",
 	"workingset_nodereclaim",
 	"nr_anon_pages",
diff --git a/mm/workingset.c b/mm/workingset.c
index 474186b..59415e0 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -6,6 +6,7 @@
 */
 
 #include <linux/memcontrol.h>
+#include <linux/mm_inline.h>
 #include <linux/writeback.h>
 #include <linux/shmem_fs.h>
 #include <linux/pagemap.h>
@@ -156,7 +157,7 @@
 *
 *		Implementation
 *
- * For each node's file LRU lists, a counter for inactive evictions
+ * For each node's anon/file LRU lists, a counter for inactive evictions
 * and activations is maintained (node->inactive_age).
 *
 * On eviction, a snapshot of this counter (along with some bits to
@@ -213,7 +214,8 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
 	*workingsetp = workingset;
 }
 
-static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat)
+static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat,
+				bool file)
 {
 	/*
 	 * Reclaiming a cgroup means reclaiming all its children in a
@@ -230,7 +232,7 @@ static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat)
 		struct lruvec *lruvec;
 
 		lruvec = mem_cgroup_lruvec(memcg, pgdat);
-		atomic_long_inc(&lruvec->inactive_age);
+		atomic_long_inc(&lruvec->inactive_age[file]);
 	} while (memcg && (memcg = parent_mem_cgroup(memcg)));
 }
 
@@ -245,6 +247,7 @@ static void advance_inactive_age(struct mem_cgroup *memcg, pg_data_t *pgdat)
 void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 {
 	struct pglist_data *pgdat = page_pgdat(page);
+	bool file = page_is_file_cache(page);
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	int memcgid;
@@ -254,12 +257,12 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
-	advance_inactive_age(page_memcg(page), pgdat);
+	advance_inactive_age(page_memcg(page), pgdat, file);
 
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_id(lruvec_memcg(lruvec));
-	eviction = atomic_long_read(&lruvec->inactive_age);
+	eviction = atomic_long_read(&lruvec->inactive_age[file]);
 	return pack_shadow(memcgid, pgdat, eviction, PageWorkingset(page));
 }
 
@@ -274,15 +277,16 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 */
 void workingset_refault(struct page *page, void *shadow)
 {
+	bool file = page_is_file_cache(page);
 	struct mem_cgroup *eviction_memcg;
 	struct lruvec *eviction_lruvec;
 	unsigned long refault_distance;
 	struct pglist_data *pgdat;
-	unsigned long active_file;
 	struct mem_cgroup *memcg;
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	unsigned long refault;
+	unsigned long active;
 	bool workingset;
 	int memcgid;
 
@@ -308,9 +312,11 @@ void workingset_refault(struct page *page, void *shadow)
 	eviction_memcg = mem_cgroup_from_id(memcgid);
 	if (!mem_cgroup_disabled() && !eviction_memcg)
 		goto out;
+
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
-	refault = atomic_long_read(&eviction_lruvec->inactive_age);
-	active_file = lruvec_page_state(eviction_lruvec, NR_ACTIVE_FILE);
+	refault = atomic_long_read(&eviction_lruvec->inactive_age[file]);
+	active = lruvec_page_state(eviction_lruvec,
+				page_lru_base_type(page) + LRU_ACTIVE);
 
 	/*
 	 * Calculate the refault distance
@@ -341,19 +347,19 @@ void workingset_refault(struct page *page, void *shadow)
 	memcg = page_memcg(page);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
-	inc_lruvec_state(lruvec, WORKINGSET_REFAULT);
+	inc_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file);
 
 	/*
 	 * Compare the distance to the existing workingset size. We
 	 * don't act on pages that couldn't stay resident even if all
 	 * the memory was available to the page cache.
 	 */
-	if (refault_distance > active_file)
+	if (refault_distance > active)
 		goto out;
 
 	SetPageActive(page);
-	advance_inactive_age(memcg, pgdat);
-	inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE);
+	advance_inactive_age(memcg, pgdat, file);
+	inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + file);
 
 	/* Page was active prior to eviction */
 	if (workingset) {
@@ -370,6 +376,7 @@ void workingset_refault(struct page *page, void *shadow)
 */
 void workingset_activation(struct page *page)
 {
+	bool file = page_is_file_cache(page);
 	struct mem_cgroup *memcg;
 
 	rcu_read_lock();
@@ -383,7 +390,7 @@ void workingset_activation(struct page *page)
 	memcg = page_memcg_rcu(page);
 	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
-	advance_inactive_age(memcg, page_pgdat(page));
+	advance_inactive_age(memcg, page_pgdat(page), file);
 out:
 	rcu_read_unlock();
 }

From patchwork Mon Mar 23 05:52:08 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 4/8] mm/swapcache: support to handle the exceptional entries in swapcache
Date: Mon, 23 Mar 2020 14:52:08 +0900
Message-Id: <1584942732-2184-5-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

The swapcache doesn't handle exceptional (shadow) entries, since so far
nothing stores them there. In a following patch, workingset detection
for anonymous pages will be implemented, and it stores shadow entries
as exceptional entries in the swapcache. So we need to handle
exceptional entries, and this patch implements that.
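[ Illustration, not part of the patch: how one swapcache slot can hold
  either a page pointer or a shadow, reduced to a standalone C model.
  The mk_shadow()/is_shadow()/shadow_data() helpers are invented for
  the sketch; the low-bit tagging they model is the same trick the
  kernel's xa_mk_value()/xa_is_value() helpers use, which is what lets
  xas_load()/xas_store() below distinguish shadows from pages. ]

#include <stdint.h>
#include <stdio.h>

/* Tag a non-pointer payload: shift it up and set the low bit. */
static inline void *mk_shadow(unsigned long eviction)
{
        return (void *)((eviction << 1) | 1);
}

static inline int is_shadow(const void *entry)
{
        return (uintptr_t)entry & 1;    /* real pointers are aligned */
}

static inline unsigned long shadow_data(const void *entry)
{
        return (uintptr_t)entry >> 1;
}

int main(void)
{
        int page = 42;                  /* stand-in for struct page */
        void *slot;

        slot = &page;                   /* page resident in swap cache */
        printf("resident: %d\n", !is_shadow(slot));

        slot = mk_shadow(12345);        /* reclaimed: shadow left behind */
        printf("shadow eviction clock: %lu\n",
               is_shadow(slot) ? shadow_data(slot) : 0);
        return 0;
}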
Signed-off-by: Joonsoo Kim
Acked-by: Johannes Weiner
---
 include/linux/swap.h | 10 ++++++----
 mm/shmem.c           |  3 ++-
 mm/swap_state.c      | 26 ++++++++++++++++++++------
 mm/vmscan.c          |  2 +-
 4 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 954e13e..273de48 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -408,9 +408,11 @@ extern struct address_space *swapper_spaces[];
 extern unsigned long total_swapcache_pages(void);
 extern void show_swap_cache_info(void);
 extern int add_to_swap(struct page *page);
-extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
+extern int add_to_swap_cache(struct page *page, swp_entry_t entry,
+			gfp_t gfp, void **shadowp);
 extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
-extern void __delete_from_swap_cache(struct page *, swp_entry_t entry);
+extern void __delete_from_swap_cache(struct page *page,
+			swp_entry_t entry, void *shadow);
 extern void delete_from_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
@@ -565,13 +567,13 @@ static inline int add_to_swap(struct page *page)
 }
 
 static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
-							gfp_t gfp_mask)
+					gfp_t gfp_mask, void **shadowp)
 {
 	return -1;
 }
 
 static inline void __delete_from_swap_cache(struct page *page,
-						swp_entry_t entry)
+					swp_entry_t entry, void *shadow)
 {
 }
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 8793e8c..c6663ad 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1370,7 +1370,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		list_add(&info->swaplist, &shmem_swaplist);
 
 	if (add_to_swap_cache(page, swap,
-			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN) == 0) {
+			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
+			NULL) == 0) {
 		spin_lock_irq(&info->lock);
 		shmem_recalc_inode(inode);
 		info->swapped++;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8e7ce9a..f06af84 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -111,12 +111,15 @@ void show_swap_cache_info(void)
 * add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
 * but sets SwapCache flag and private instead of mapping and index.
 */
-int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
+int add_to_swap_cache(struct page *page, swp_entry_t entry,
+			gfp_t gfp, void **shadowp)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
 	unsigned long i, nr = compound_nr(page);
+	unsigned long nrexceptional = 0;
+	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
@@ -132,10 +135,17 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 			goto unlock;
 		for (i = 0; i < nr; i++) {
 			VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+			old = xas_load(&xas);
+			if (xa_is_value(old)) {
+				nrexceptional++;
+				if (shadowp)
+					*shadowp = old;
+			}
 			set_page_private(page + i, entry.val + i);
 			xas_store(&xas, page);
 			xas_next(&xas);
 		}
+		address_space->nrexceptional -= nrexceptional;
 		address_space->nrpages += nr;
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		ADD_CACHE_INFO(add_total, nr);
@@ -155,7 +165,8 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 * This must be called only on pages that have
 * been verified to be in the swap cache.
 */
-void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
+void __delete_from_swap_cache(struct page *page,
+			swp_entry_t entry, void *shadow)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	int i, nr = hpage_nr_pages(page);
@@ -167,12 +178,14 @@ void __delete_from_swap_cache(struct page *page, swp_entry_t entry)
 	VM_BUG_ON_PAGE(PageWriteback(page), page);
 
 	for (i = 0; i < nr; i++) {
-		void *entry = xas_store(&xas, NULL);
+		void *entry = xas_store(&xas, shadow);
 		VM_BUG_ON_PAGE(entry != page, entry);
 		set_page_private(page + i, 0);
 		xas_next(&xas);
 	}
 	ClearPageSwapCache(page);
+	if (shadow)
+		address_space->nrexceptional += nr;
 	address_space->nrpages -= nr;
 	__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	ADD_CACHE_INFO(del_total, nr);
@@ -209,7 +222,7 @@ int add_to_swap(struct page *page)
 	 * Add it to the swap cache.
 	 */
 	err = add_to_swap_cache(page, entry,
-			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN);
+			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
 	if (err)
 		/*
 		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely
@@ -247,7 +260,7 @@ void delete_from_swap_cache(struct page *page)
 	struct address_space *address_space = swap_address_space(entry);
 
 	xa_lock_irq(&address_space->i_pages);
-	__delete_from_swap_cache(page, entry);
+	__delete_from_swap_cache(page, entry, NULL);
 	xa_unlock_irq(&address_space->i_pages);
 
 	put_swap_page(page, entry);
@@ -418,7 +431,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		/* May fail (-ENOMEM) if XArray node allocation failed. */
 		__SetPageLocked(new_page);
 		__SetPageSwapBacked(new_page);
-		err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL);
+		err = add_to_swap_cache(new_page, entry,
+					gfp_mask & GFP_KERNEL, NULL);
 		if (likely(!err)) {
 			/* Initiate read into locked page */
 			SetPageWorkingset(new_page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0493c25..9871861 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -909,7 +909,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	if (PageSwapCache(page)) {
 		swp_entry_t swap = { .val = page_private(page) };
 		mem_cgroup_swapout(page, swap);
-		__delete_from_swap_cache(page, swap);
+		__delete_from_swap_cache(page, swap, NULL);
 		xa_unlock_irqrestore(&mapping->i_pages, flags);
 		put_swap_page(page, swap);
 	} else {

From patchwork Mon Mar 23 05:52:09 2020
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 5/8] mm/workingset: handle the page without memcg
Date: Mon, 23 Mar 2020 14:52:09 +0900
Message-Id: <1584942732-2184-6-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

While implementing workingset detection for anonymous pages, I found
some swapcache pages with a NULL memcg. They were brought in by swap
readahead and nobody has touched them since.

The idea behind the workingset code is to tell at page fault time
whether pages have been previously used or not. Since such a page
hasn't been used, don't store a shadow entry for it; when it later
faults back in, we treat it as the new page that it is.
Signed-off-by: Joonsoo Kim
Acked-by: Johannes Weiner
---
 mm/workingset.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/workingset.c b/mm/workingset.c
index 59415e0..8b192e8 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -257,6 +257,19 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
+	/*
+	 * A page can be without a cgroup here when it was brought in by
+	 * swap readahead and nobody has touched it since.
+	 *
+	 * The idea behind the workingset code is to tell on page fault
+	 * time whether pages have been previously used or not. Since
+	 * this page hasn't been used, don't store a shadow entry for it;
+	 * when it later faults back in, we treat it as the new page
+	 * that it is.
+	 */
+	if (!page_memcg(page))
+		return NULL;
+
 	advance_inactive_age(page_memcg(page), pgdat, file);
 
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);

From patchwork Mon Mar 23 05:52:10 2020
From patchwork Mon Mar 23 05:52:10 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11452385
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 6/8] mm/swap: implement workingset detection for anonymous LRU
Date: Mon, 23 Mar 2020 14:52:10 +0900
Message-Id: <1584942732-2184-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

This patch implements workingset detection for the anonymous LRU.
All the infrastructure was put in place by the previous patches, so
this patch just activates workingset detection by installing and
retrieving the shadow entry.
Signed-off-by: Joonsoo Kim
---
 include/linux/swap.h |  6 ++++++
 mm/memory.c          |  7 ++++++-
 mm/swap_state.c      | 20 ++++++++++++++++++--
 mm/vmscan.c          |  7 +++++--
 4 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 273de48..fb4772e 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -408,6 +408,7 @@ extern struct address_space *swapper_spaces[];
 extern unsigned long total_swapcache_pages(void);
 extern void show_swap_cache_info(void);
 extern int add_to_swap(struct page *page);
+extern void *get_shadow_from_swap_cache(swp_entry_t entry);
 extern int add_to_swap_cache(struct page *page, swp_entry_t entry,
 			gfp_t gfp, void **shadowp);
 extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
@@ -566,6 +567,11 @@ static inline int add_to_swap(struct page *page)
 	return 0;
 }
 
+static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
+{
+	return NULL;
+}
+
 static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
 					gfp_t gfp_mask, void **shadowp)
 {
diff --git a/mm/memory.c b/mm/memory.c
index 5f7813a..91a2097 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2925,10 +2925,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
 							vmf->address);
 			if (page) {
+				void *shadow;
+
 				__SetPageLocked(page);
 				__SetPageSwapBacked(page);
 				set_page_private(page, entry.val);
-				lru_cache_add_anon(page);
+				shadow = get_shadow_from_swap_cache(entry);
+				if (shadow)
+					workingset_refault(page, shadow);
+				lru_cache_add(page);
 				swap_readpage(page, true);
 			}
 		} else {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f06af84..f996455 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -107,6 +107,18 @@ void show_swap_cache_info(void)
 	printk("Total swap = %lukB\n", total_swap_pages << (PAGE_SHIFT - 10));
 }
 
+void *get_shadow_from_swap_cache(swp_entry_t entry)
+{
+	struct address_space *address_space = swap_address_space(entry);
+	pgoff_t idx = swp_offset(entry);
+	struct page *page;
+
+	page = find_get_entry(address_space, idx);
+	if (xa_is_value(page))
+		return page;
+	return NULL;
+}
+
 /*
  * add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
  * but sets SwapCache flag and private instead of mapping and index.
@@ -376,6 +388,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	struct page *found_page = NULL, *new_page = NULL;
 	struct swap_info_struct *si;
 	int err;
+	void *shadow;
 
 	*new_page_allocated = false;
 
 	do {
@@ -431,12 +444,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		/* May fail (-ENOMEM) if XArray node allocation failed. */
 		__SetPageLocked(new_page);
 		__SetPageSwapBacked(new_page);
+		shadow = NULL;
 		err = add_to_swap_cache(new_page, entry,
-					gfp_mask & GFP_KERNEL, NULL);
+					gfp_mask & GFP_KERNEL, &shadow);
 		if (likely(!err)) {
 			/* Initiate read into locked page */
 			SetPageWorkingset(new_page);
-			lru_cache_add_anon(new_page);
+			if (shadow)
+				workingset_refault(new_page, shadow);
+			lru_cache_add(new_page);
 			*new_page_allocated = true;
 			return new_page;
 		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9871861..b37cc26 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -867,6 +867,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 {
 	unsigned long flags;
 	int refcount;
+	void *shadow = NULL;
 
 	BUG_ON(!PageLocked(page));
 	BUG_ON(mapping != page_mapping(page));
@@ -909,12 +910,13 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	if (PageSwapCache(page)) {
 		swp_entry_t swap = { .val = page_private(page) };
 		mem_cgroup_swapout(page, swap);
-		__delete_from_swap_cache(page, swap, NULL);
+		if (reclaimed && !mapping_exiting(mapping))
+			shadow = workingset_eviction(page, target_memcg);
+		__delete_from_swap_cache(page, swap, shadow);
 		xa_unlock_irqrestore(&mapping->i_pages, flags);
 		put_swap_page(page, swap);
 	} else {
 		void (*freepage)(struct page *);
-		void *shadow = NULL;
 
 		freepage = mapping->a_ops->freepage;
 		/*
@@ -1476,6 +1478,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 				SetPageActive(page);
 				stat->nr_activate[type] += nr_pages;
 				count_memcg_page_event(page, PGACTIVATE);
+				workingset_activation(page);
 			}
 keep_locked:
 			unlock_page(page);
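A detail worth noting in get_shadow_from_swap_cache() above: the shadow
is not a page pointer but an XArray value entry, which is why
find_get_entry() can return either and xa_is_value() tells them apart.
The encoding below is the stock <linux/xarray.h> convention; the
fragment around it is only an illustration, not code from this patch:

	/* xa_mk_value() stores an unsigned long as (v << 1) | 1. The
	 * result is always odd, while a struct page * is always even
	 * (pointer-aligned), so xa_is_value() just checks bit 0. */
	void *entry = xa_mk_value(eviction_info);	/* at eviction */

	if (xa_is_value(entry))				/* at swap-in */
		info = xa_to_value(entry);		/* entry >> 1 */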
From patchwork Mon Mar 23 05:52:11 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11452387
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 7/8] mm/vmscan: restore active/inactive ratio for anonymous LRU
Date: Mon, 23 Mar 2020 14:52:11 +0900
Message-Id: <1584942732-2184-8-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

Now that workingset detection is implemented for the anonymous
LRU, we no longer need to worry about workingset pages being missed
because of the active/inactive ratio. Let's restore the ratio.

Signed-off-by: Joonsoo Kim
Acked-by: Johannes Weiner
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b37cc26..3d44e32 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2218,7 +2218,7 @@ static bool inactive_is_low(struct lruvec *lruvec, enum lru_list inactive_lru)
 	active = lruvec_page_state(lruvec, NR_LRU_BASE + active_lru);
 
 	gb = (inactive + active) >> (30 - PAGE_SHIFT);
-	if (gb && is_file_lru(inactive_lru))
+	if (gb)
 		inactive_ratio = int_sqrt(10 * gb);
 	else
 		inactive_ratio = 1;
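For a sense of scale: in this era's mm/vmscan.c, inactive_is_low()
reports the inactive list as too small when
inactive * inactive_ratio < active (that return statement sits just
below the hunk shown), so this one-liner lets anon lists on larger
systems tolerate proportionally smaller inactive lists, as file lists
already did. A quick userspace check of the formula; int_sqrt() here is
a naive stand-in for the kernel's, and the whole program is mine, not
part of the patch:

	#include <stdio.h>

	static unsigned long int_sqrt(unsigned long x)
	{
		unsigned long r = 0;

		while ((r + 1) * (r + 1) <= x)	/* good enough for a demo */
			r++;
		return r;
	}

	int main(void)
	{
		/* gb is the LRU size in GiB, as computed by the hunk */
		for (unsigned long gb = 1; gb <= 1000; gb *= 10)
			printf("%4lu GiB -> inactive_ratio %lu\n",
			       gb, int_sqrt(10 * gb));
		return 0;
	}

	/* prints: 1 GiB -> 3, 10 GiB -> 10, 100 GiB -> 31,
	 * 1000 GiB -> 100 */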
From patchwork Mon Mar 23 05:52:12 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11452389
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Hugh Dickins, Minchan Kim, Vlastimil Babka, Mel Gorman,
 kernel-team@lge.com, Joonsoo Kim
Subject: [PATCH v4 8/8] mm/swap: count a new anonymous page as a reclaim_state's rotate
Date: Mon, 23 Mar 2020 14:52:12 +0900
Message-Id: <1584942732-2184-9-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584942732-2184-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

reclaim_stat's rotate is used to control the ratio of page scanning
between the file and anonymous LRUs. Before this series, every new
anonymous page was counted toward rotate, since it started life on the
active LRU; this protected anonymous pages and made reclaim hit the
anonymous LRU less often than the file LRU. The situation has now
changed: new anonymous pages are no longer added to the active LRU, so
rotate would be far lower than before. Reclaim on the anonymous LRU
would then happen more often, which could regress systems tuned for the
previous behaviour. Therefore, this patch counts a new anonymous page
toward reclaim_stat's rotate. Adding this count to rotate is not
strictly logical in the current algorithm, but reducing the regression
matters more. I found this regression in a kernel-build test, at
roughly 2~5% performance degradation; with this workaround, performance
is completely restored.
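To see why this count matters, note how scan pressure is balanced (a
paraphrase of this era's get_scan_count(), from memory and simplified;
swappiness, priority, and low-list protection are left out): each LRU's
pressure is scaled by recent_scanned / recent_rotated, so inflating the
anon rotate count shrinks the share of scanning aimed at anon:

	/* sketch, not the kernel's exact code */
	ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1);
	ap /= reclaim_stat->recent_rotated[0] + 1;	/* anon pressure */

	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
	fp /= reclaim_stat->recent_rotated[1] + 1;	/* file pressure */

	/* scan targets are then split in proportion ap : fp */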
v2: fix a bug that reused the rotate value for the previous page

Reported-by: kernel test robot
Signed-off-by: Joonsoo Kim
---
 mm/swap.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/mm/swap.c b/mm/swap.c
index 442d27e..1f19301 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -187,6 +187,9 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(get_kernel_page);
 
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
+				 void *arg);
+
 static void pagevec_lru_move_fn(struct pagevec *pvec,
 	void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
 	void *arg)
@@ -199,6 +202,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
+		void *arg_orig = arg;
 
 		if (pagepgdat != pgdat) {
 			if (pgdat)
@@ -207,8 +211,22 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 			spin_lock_irqsave(&pgdat->lru_lock, flags);
 		}
 
+		if (move_fn == __pagevec_lru_add_fn) {
+			struct list_head *entry = &page->lru;
+			unsigned long next = (unsigned long)entry->next;
+			unsigned long rotate = next & 2;
+
+			if (rotate) {
+				VM_BUG_ON(arg);
+
+				next = next & ~2;
+				entry->next = (struct list_head *)next;
+				arg = (void *)rotate;
+			}
+		}
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		(*move_fn)(page, lruvec, arg);
+		arg = arg_orig;
 	}
 	if (pgdat)
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
@@ -475,6 +493,14 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
 				    hpage_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 	}
+
+	if (PageSwapBacked(page) && !unevictable) {
+		struct list_head *entry = &page->lru;
+		unsigned long next = (unsigned long)entry->next;
+
+		next = next | 2;
+		entry->next = (struct list_head *)next;
+	}
 	lru_cache_add(page);
 }
 
@@ -927,6 +953,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	unsigned long rotate = (unsigned long)arg;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
@@ -962,7 +989,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	if (page_evictable(page)) {
 		lru = page_lru(page);
 		update_page_reclaim_stat(lruvec, page_is_file_cache(page),
-					PageActive(page));
+					PageActive(page) | rotate);
 		if (was_unevictable)
 			count_vm_event(UNEVICTABLE_PGRESCUED);
 	} else {
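The mechanism above smuggles a one-bit flag through page->lru.next:
while the page sits on a pagevec it is not linked into any LRU list, so
the low bits of the pointer-aligned list linkage are unused and can
carry state until __pagevec_lru_add_fn() consumes it. A standalone
illustration of the same tag-bit idea (generic C, written for this
note, not kernel code):

	#include <assert.h>
	#include <stdint.h>

	struct node { struct node *next; };

	/* aligned pointers have zero low bits, so bit 1 can hold a flag */
	static void set_tag(struct node *n)
	{
		n->next = (struct node *)((uintptr_t)n->next | 2);
	}

	static int test_and_clear_tag(struct node *n)
	{
		uintptr_t v = (uintptr_t)n->next;

		n->next = (struct node *)(v & ~(uintptr_t)2);
		return (v & 2) != 0;
	}

	int main(void)
	{
		struct node a = { .next = &a };

		set_tag(&a);
		assert(test_and_clear_tag(&a) == 1);	/* flag seen once */
		assert(test_and_clear_tag(&a) == 0);	/* then gone */
		assert(a.next == &a);			/* pointer intact */
		return 0;
	}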