From patchwork Wed Aug 19 04:27:38 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 11722503
Subject: [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
From: Alexander Duyck
To: alex.shi@linux.alibaba.com
Cc: yang.shi@linux.alibaba.com, lkp@intel.com, rong.a.chen@intel.com,
    khlebnikov@yandex-team.ru, kirill@shutemov.name, hughd@google.com,
    linux-kernel@vger.kernel.org, alexander.duyck@gmail.com,
    daniel.m.jordan@oracle.com, linux-mm@kvack.org, shakeelb@google.com,
    willy@infradead.org, hannes@cmpxchg.org, tj@kernel.org,
    cgroups@vger.kernel.org, akpm@linux-foundation.org,
    richard.weiyang@gmail.com, mgorman@techsingularity.net,
    iamjoonsoo.kim@lge.com
Date: Tue, 18 Aug 2020 21:27:38 -0700
Message-ID: <20200819042738.23414.60815.stgit@localhost.localdomain>
In-Reply-To: <20200819041852.23414.95939.stgit@localhost.localdomain>
References: <20200819041852.23414.95939.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

From: Alexander Duyck

The current code for move_pages_to_lru is meant to release the LRU lock
every time it encounters an unevictable page or a compound page that must
be freed. This results in a fair amount of code bulk, because the lruvec
has to be reacquired every time the lock is released.

Instead of doing this, I believe we can break the code up into three
passes. The first pass will identify the pages we can move to the LRU and
move those. In addition, it will sort the list, leaving the unevictable
pages on it and moving pages whose reference count has dropped to zero to
pages_to_free. The second pass will return the unevictable pages to the
LRU.
The final pass will free any compound pages we have in the pages_to_free
list before we merge it back with the original list and return from the
function.

The advantage of doing it this way is that we only have to release the
lock between pass 1 and pass 2, and then reacquire it after pass 3, once
pages_to_free has been merged back into the original list. As such, we
only have to release the lock at most once per call instead of having to
test whether we need to relock for each page.

Signed-off-by: Alexander Duyck
Reviewed-by: Alex Shi
---
 mm/vmscan.c |   68 ++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 39 insertions(+), 29 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ebe3f9b653b..6a2bdbc1a9eb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1850,22 +1850,21 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 {
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
-	struct page *page;
-	struct lruvec *orig_lruvec = lruvec;
+	struct page *page, *next;
 	enum lru_list lru;
 
-	while (!list_empty(list)) {
-		page = lru_to_page(list);
+	list_for_each_entry_safe(page, next, list, lru) {
 		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
-		if (unlikely(!page_evictable(page))) {
-			if (lruvec) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				lruvec = NULL;
-			}
-			putback_lru_page(page);
+
+		/*
+		 * if page is unevictable leave it on the list to be returned
+		 * to the LRU after we have finished processing the other
+		 * entries in the list.
+		 */
+		if (unlikely(!page_evictable(page)))
 			continue;
-		}
+
+		list_del(&page->lru);
 
 		/*
 		 * The SetPageLRU needs to be kept here for list intergrity.
@@ -1878,20 +1877,14 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		 * list_add(&page->lru,)
 		 * list_add(&page->lru,)
 		 */
-		lruvec = relock_page_lruvec_irq(page, lruvec);
 		SetPageLRU(page);
 
 		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
 
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				lruvec = NULL;
-				destroy_compound_page(page);
-			} else
-				list_add(&page->lru, &pages_to_free);
-
+			/* defer freeing until we can release lru_lock */
+			list_add(&page->lru, &pages_to_free);
 			continue;
 		}
 
@@ -1904,16 +1897,33 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		if (PageActive(page))
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
-	if (orig_lruvec != lruvec) {
-		if (lruvec)
-			spin_unlock_irq(&lruvec->lru_lock);
-		spin_lock_irq(&orig_lruvec->lru_lock);
-	}
-
-	/*
-	 * To save our caller's stack, now use input list for pages to free.
-	 */
-	list_splice(&pages_to_free, list);
+	if (unlikely(!list_empty(list) || !list_empty(&pages_to_free))) {
+		spin_unlock_irq(&lruvec->lru_lock);
+
+		/* return any unevictable pages to the LRU list */
+		while (!list_empty(list)) {
+			page = lru_to_page(list);
+			list_del(&page->lru);
+			putback_lru_page(page);
+		}
+
+		/*
+		 * To save our caller's stack use input
+		 * list for pages to free.
+		 */
+		list_splice(&pages_to_free, list);
+
+		/* free any compound pages we have in the list */
+		list_for_each_entry_safe(page, next, list, lru) {
+			if (likely(!PageCompound(page)))
+				continue;
+			list_del(&page->lru);
+			destroy_compound_page(page);
+		}
+
+		spin_lock_irq(&lruvec->lru_lock);
+	}
 
 	return nr_moved;
 }
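
As a reading aid, and not part of the patch itself, below is a condensed
sketch of what move_pages_to_lru() looks like with this change applied. It
is reconstructed from the hunks above purely for illustration; the nr_pages
accounting, the add_page_to_lru_list() insertion and the workingset aging
that the real function performs in pass 1 are elided.

/*
 * Condensed, illustrative view of move_pages_to_lru() after this patch,
 * reconstructed from the diff above -- not the literal applied code.
 */
static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
						     struct list_head *list)
{
	LIST_HEAD(pages_to_free);
	struct page *page, *next;
	int nr_moved = 0;

	/* Pass 1: under lru_lock, move evictable pages back onto the LRU. */
	list_for_each_entry_safe(page, next, list, lru) {
		if (unlikely(!page_evictable(page)))
			continue;		/* left on the list for pass 2 */

		list_del(&page->lru);
		SetPageLRU(page);

		if (unlikely(put_page_testzero(page))) {
			__ClearPageLRU(page);
			__ClearPageActive(page);
			/* freeing is deferred until lru_lock is dropped */
			list_add(&page->lru, &pages_to_free);
			continue;
		}

		/* ... add_page_to_lru_list(), statistics, nr_moved ... */
	}

	/* The lock is released at most once, and only if there is leftover work. */
	if (unlikely(!list_empty(list) || !list_empty(&pages_to_free))) {
		spin_unlock_irq(&lruvec->lru_lock);

		/* Pass 2: return unevictable pages to the LRU. */
		while (!list_empty(list)) {
			page = lru_to_page(list);
			list_del(&page->lru);
			putback_lru_page(page);
		}

		/* Reuse the caller's list for the pages being freed. */
		list_splice(&pages_to_free, list);

		/* Pass 3: free any deferred compound pages. */
		list_for_each_entry_safe(page, next, list, lru) {
			if (unlikely(PageCompound(page))) {
				list_del(&page->lru);
				destroy_compound_page(page);
			}
		}

		spin_lock_irq(&lruvec->lru_lock);
	}

	return nr_moved;
}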