From patchwork Fri Dec 25 09:59:47 2020
X-Patchwork-Submitter: Alex Shi <alex.shi@linux.alibaba.com>
X-Patchwork-Id: 11990089
From: Alex Shi <alex.shi@linux.alibaba.com>
To: willy@infradead.org
Cc: tim.c.chen@linux.intel.com, Konstantin Khlebnikov, Hugh Dickins,
 Yu Zhao, Michal Hocko, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/4] mm/swap.c: pre-sort pages in pagevec for pagevec_lru_move_fn
Date: Fri, 25 Dec 2020 17:59:47 +0800
Message-Id: <1608890390-64305-2-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com>
References: <20201126155553.GT4327@casper.infradead.org>
 <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com>

Pages in a pagevec may belong to different lruvecs, so
pagevec_lru_move_fn() has to relock as it walks the vector, and each
relock can leave the current CPU waiting a long time on the same lock
because of spinlock fairness.

Before the per-memcg lru_lock we had to put up with that relocking,
since the spinlock was the only way to serialize a page's memcg/lruvec.
Now TestClearPageLRU can isolate pages exclusively and stabilize a
page's lruvec/memcg. That gives us a chance to sort the pages by lruvec
before the move in pagevec_lru_move_fn(), so we no longer suffer the
spinlock's fairness wait.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Yu Zhao
Cc: Michal Hocko
Cc: Matthew Wilcox (Oracle)
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
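For reference, a minimal user-space sketch of the co-sort below: a key
array (the lruvec addresses) is sorted together with its payload array
(the page pointers) using the same small gap sequence. Everything here
is invented for the demo, not kernel code; note that, unlike the loop
condition in the patch, the sketch skips a too-large gap instead of
stopping at it, so the gap-1 pass still runs for small n.

#include <stdio.h>

#define NR 15	/* mirrors PAGEVEC_SIZE of this era; demo value only */

static const int demo_gaps[] = { 6, 4, 3, 2, 1, 0 };

/* gap-sequence insertion sort of key[], moving payload[] in step */
static void co_sort(unsigned long *key, int *payload, int n)
{
	int g, i, j;

	for (g = 0; demo_gaps[g] > 0; g++) {
		int gap = demo_gaps[g];

		if (gap > n / 2)	/* too big for this n: try a smaller gap */
			continue;
		for (i = gap; i < n; i++) {
			unsigned long tmp = key[i];
			int p = payload[i];

			for (j = i - gap; j >= 0 && key[j] > tmp; j -= gap) {
				key[j + gap] = key[j];
				payload[j + gap] = payload[j];
			}
			key[j + gap] = tmp;
			payload[j + gap] = p;
		}
	}
}

int main(void)
{
	/* three fake "lruvec addresses", interleaved as a pagevec might see them */
	unsigned long key[NR] = { 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2 };
	int page[NR];
	int i;

	for (i = 0; i < NR; i++)
		page[i] = i;
	co_sort(key, page, NR);
	for (i = 0; i < NR; i++)	/* keys come out grouped: 1s, then 2s, then 3s */
		printf("key %lu page %d\n", key[i], page[i]);
	return 0;
}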
 mm/swap.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 79 insertions(+), 13 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index c5363bdebe67..994641331bf7 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -201,29 +201,95 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
 }
 EXPORT_SYMBOL_GPL(get_kernel_page);
 
+/* Pratt's gaps for shell sort, https://en.wikipedia.org/wiki/Shellsort */
+static int gaps[] = { 6, 4, 3, 2, 1, 0};
+
+/* Shell sort pagevec[] on page's lruvec.*/
+static void shell_sort(struct pagevec *pvec, unsigned long *lvaddr)
+{
+	int g, i, j, n = pagevec_count(pvec);
+
+	for (g = 0; gaps[g] > 0 && gaps[g] <= n/2; g++) {
+		int gap = gaps[g];
+
+		for (i = gap; i < n; i++) {
+			unsigned long tmp = lvaddr[i];
+			struct page *page = pvec->pages[i];
+
+			for (j = i - gap; j >= 0 && lvaddr[j] > tmp; j -= gap) {
+				lvaddr[j + gap] = lvaddr[j];
+				pvec->pages[j + gap] = pvec->pages[j];
+			}
+			lvaddr[j + gap] = tmp;
+			pvec->pages[j + gap] = page;
+		}
+	}
+}
+
+/* Get lru bit cleared page and their lruvec address, release the others */
+void sort_isopv(struct pagevec *pvec, struct pagevec *isopv,
+		unsigned long *lvaddr)
+{
+	int i, j;
+	struct pagevec busypv;
+
+	pagevec_init(&busypv);
+
+	for (i = 0, j = 0; i < pagevec_count(pvec); i++) {
+		struct page *page = pvec->pages[i];
+
+		pvec->pages[i] = NULL;
+
+		/* block memcg migration during page moving between lru */
+		if (!TestClearPageLRU(page)) {
+			pagevec_add(&busypv, page);
+			continue;
+		}
+		lvaddr[j++] = (unsigned long)
+			mem_cgroup_page_lruvec(page, page_pgdat(page));
+		pagevec_add(isopv, page);
+	}
+	pagevec_reinit(pvec);
+	if (pagevec_count(&busypv))
+		release_pages(busypv.pages, busypv.nr);
+
+	shell_sort(isopv, lvaddr);
+}
+
 static void pagevec_lru_move_fn(struct pagevec *pvec,
 	void (*move_fn)(struct page *page, struct lruvec *lruvec))
 {
-	int i;
+	int i, n;
 	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
+	unsigned long lvaddr[PAGEVEC_SIZE];
+	struct pagevec isopv;
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
+	pagevec_init(&isopv);
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
-			continue;
+	sort_isopv(pvec, &isopv, lvaddr);
 
-		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
-		(*move_fn)(page, lruvec);
+	n = pagevec_count(&isopv);
+	if (!n)
+		return;
 
-		SetPageLRU(page);
+	lruvec = (struct lruvec *)lvaddr[0];
+	spin_lock_irqsave(&lruvec->lru_lock, flags);
+
+	for (i = 0; i < n; i++) {
+		/* lock new lruvec if lruvec changes, we have sorted them */
+		if (lruvec != (struct lruvec *)lvaddr[i]) {
+			spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+			lruvec = (struct lruvec *)lvaddr[i];
+			spin_lock_irqsave(&lruvec->lru_lock, flags);
+		}
+
+		(*move_fn)(isopv.pages[i], lruvec);
+
+		SetPageLRU(isopv.pages[i]);
 	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+	release_pages(isopv.pages, isopv.nr);
 }
 
 static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)

From patchwork Fri Dec 25 09:59:48 2020
X-Patchwork-Submitter: Alex Shi <alex.shi@linux.alibaba.com>
X-Patchwork-Id: 11990091
From: Alex Shi <alex.shi@linux.alibaba.com>
To: willy@infradead.org
Cc: tim.c.chen@linux.intel.com, Konstantin Khlebnikov, Hugh Dickins,
 Yu Zhao, Michal Hocko, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH 2/4] mm/swap.c: bail out early for no memcg and no numa
Date: Fri, 25 Dec 2020 17:59:48 +0800
Message-Id: <1608890390-64305-3-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com>
References: <20201126155553.GT4327@casper.infradead.org>
 <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com>

If a system has memcg disabled and only one NUMA node, as on many
embedded systems, there is no need to sort the pagevec: the whole
system has just a single lruvec. In that situation we can skip the
sorting.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Yu Zhao
Cc: Michal Hocko
Cc: Matthew Wilcox (Oracle)
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
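As an illustration of why the bail-out is safe, a small user-space
sketch: with memcg disabled and a single node there is exactly one
lruvec, so every sort key is identical and the sort is pure overhead.
memcg_enabled and nr_nodes below are invented stand-ins for the
kernel's !mem_cgroup_disabled() and num_online_nodes(), and the sort is
a plain insertion sort; nothing here is kernel code.

#include <stdbool.h>
#include <stdio.h>

static bool memcg_enabled = false;	/* stand-in predicate */
static int nr_nodes = 1;		/* stand-in predicate */

static int sort_calls;

static void sort_keys(unsigned long *key, int n)
{
	int i, j;

	sort_calls++;
	for (i = 1; i < n; i++) {	/* plain insertion sort, demo only */
		unsigned long tmp = key[i];

		for (j = i - 1; j >= 0 && key[j] > tmp; j--)
			key[j + 1] = key[j];
		key[j + 1] = tmp;
	}
}

static void maybe_sort(unsigned long *key, int n)
{
	/* keys can only differ when there is more than one possible lruvec */
	if (memcg_enabled || nr_nodes > 1)
		sort_keys(key, n);
}

int main(void)
{
	unsigned long key[4] = { 42, 42, 42, 42 };	/* one lruvec address */

	maybe_sort(key, 4);
	printf("sort ran %d time(s)\n", sort_calls);	/* 0 on this config */
	return 0;
}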
 mm/swap.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 994641331bf7..bb5300b7e321 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -235,6 +235,7 @@ void sort_isopv(struct pagevec *pvec, struct pagevec *isopv,
 
 	pagevec_init(&busypv);
 
+
 	for (i = 0, j = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
 
@@ -253,7 +254,8 @@ void sort_isopv(struct pagevec *pvec, struct pagevec *isopv,
 	if (pagevec_count(&busypv))
 		release_pages(busypv.pages, busypv.nr);
 
-	shell_sort(isopv, lvaddr);
+	if (!mem_cgroup_disabled() || num_online_nodes() > 1)
+		shell_sort(isopv, lvaddr);
 }
 
 static void pagevec_lru_move_fn(struct pagevec *pvec,
@@ -263,13 +265,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
 	unsigned long lvaddr[PAGEVEC_SIZE];
-	struct pagevec isopv;
-
-	pagevec_init(&isopv);
+	struct pagevec sortedpv;
 
-	sort_isopv(pvec, &isopv, lvaddr);
+	pagevec_init(&sortedpv);
+	sort_isopv(pvec, &sortedpv, lvaddr);
 
-	n = pagevec_count(&isopv);
+	n = pagevec_count(&sortedpv);
 	if (!n)
 		return;
 
@@ -284,12 +285,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 			spin_lock_irqsave(&lruvec->lru_lock, flags);
 		}
 
-		(*move_fn)(isopv.pages[i], lruvec);
+		(*move_fn)(sortedpv.pages[i], lruvec);
 
-		SetPageLRU(isopv.pages[i]);
+		SetPageLRU(sortedpv.pages[i]);
 	}
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
-	release_pages(isopv.pages, isopv.nr);
+	release_pages(sortedpv.pages, sortedpv.nr);
 }
 
 static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)

From patchwork Fri Dec 25 09:59:49 2020
X-Patchwork-Submitter: Alex Shi <alex.shi@linux.alibaba.com>
X-Patchwork-Id: 11990087
From: Alex Shi <alex.shi@linux.alibaba.com>
To: willy@infradead.org
Cc: tim.c.chen@linux.intel.com, Konstantin Khlebnikov, Hugh Dickins,
 Yu Zhao, Michal Hocko, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH 3/4] mm/swap.c: extend the usage to pagevec_lru_add
Date: Fri, 25 Dec 2020 17:59:49 +0800
Message-Id: <1608890390-64305-4-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com>
References: <20201126155553.GT4327@casper.infradead.org>
 <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com>

The only difference between __pagevec_lru_add() and the other paths
that move pages between lru lists is that a page being added to an lru
list needs no TestClearPageLRU beforehand and no setting of the lru bit
back afterwards. So we can combine them with a "clear lru bit" switch
passed as a parameter to the sort function. Then all the lru list
operation functions are unified.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Yu Zhao
Cc: Michal Hocko
Cc: Matthew Wilcox (Oracle)
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
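For illustration, a user-space sketch of the "clear lru bit" switch:
one driver serves both existing-page moves (claim the bit, restore it
after the move) and new-page adds (nothing to claim). struct toy_page,
try_claim() and unified_move() are invented for the demo; only the
control flow mirrors the patch.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	int id;
	atomic_bool lru;	/* models PG_lru */
};

/* models TestClearPageLRU: clear the bit, report whether we owned it */
static bool try_claim(struct toy_page *p)
{
	return atomic_exchange(&p->lru, false);
}

static void move_fn(struct toy_page *p)
{
	printf("moving page %d\n", p->id);	/* the list splice would go here */
}

static void unified_move(struct toy_page **pages, int n, bool clearlru)
{
	int i;

	for (i = 0; i < n; i++) {
		/* existing pages must be claimed; new pages have no bit to clear */
		if (clearlru && !try_claim(pages[i]))
			continue;
		move_fn(pages[i]);
		if (clearlru)	/* restore the bit; the add path sets it in move_fn */
			atomic_store(&pages[i]->lru, true);
	}
}

int main(void)
{
	struct toy_page a = { .id = 1 }, b = { .id = 2 };
	struct toy_page *pv[2] = { &a, &b };

	atomic_store(&a.lru, true);	/* a is on an lru list, b is not */
	unified_move(pv, 2, true);	/* move path: claims a, skips busy b */
	unified_move(pv, 2, false);	/* add path: takes both pages */
	return 0;
}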
 mm/swap.c | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index bb5300b7e321..9a2269e5099b 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -12,6 +12,7 @@
  * Started 18.12.91
  * Swap aging added 23.2.95, Stephen Tweedie.
  * Buffermem limits added 12.3.98, Rik van Riel.
+ * Pre-sort pagevec added 12.1.20, Alex Shi.
  */
 
 #include <linux/mm.h>
@@ -227,8 +228,8 @@ static void shell_sort(struct pagevec *pvec, unsigned long *lvaddr)
 }
 
 /* Get lru bit cleared page and their lruvec address, release the others */
-void sort_isopv(struct pagevec *pvec, struct pagevec *isopv,
-		unsigned long *lvaddr)
+static void sort_isopv(struct pagevec *pvec, struct pagevec *isopv,
+		unsigned long *lvaddr, bool clearlru)
 {
 	int i, j;
 	struct pagevec busypv;
@@ -242,7 +243,7 @@ void sort_isopv(struct pagevec *pvec, struct pagevec *isopv,
 		pvec->pages[i] = NULL;
 
 		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page)) {
+		if (clearlru && !TestClearPageLRU(page)) {
 			pagevec_add(&busypv, page);
 			continue;
 		}
@@ -266,9 +267,13 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	unsigned long flags = 0;
 	unsigned long lvaddr[PAGEVEC_SIZE];
 	struct pagevec sortedpv;
+	bool clearlru;
+
+	/* don't clear lru bit for new page adding to lru */
+	clearlru = pvec != this_cpu_ptr(&lru_pvecs.lru_add);
 
 	pagevec_init(&sortedpv);
-	sort_isopv(pvec, &sortedpv, lvaddr);
+	sort_isopv(pvec, &sortedpv, lvaddr, clearlru);
 
 	n = pagevec_count(&sortedpv);
 	if (!n)
@@ -287,7 +292,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 
 		(*move_fn)(sortedpv.pages[i], lruvec);
 
-		SetPageLRU(sortedpv.pages[i]);
+		if (clearlru)
+			SetPageLRU(sortedpv.pages[i]);
 	}
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 	release_pages(sortedpv.pages, sortedpv.nr);
@@ -1111,20 +1117,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
  */
 void __pagevec_lru_add(struct pagevec *pvec)
 {
-	int i;
-	struct lruvec *lruvec = NULL;
-	unsigned long flags = 0;
-
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
-
-		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
-		__pagevec_lru_add_fn(page, lruvec);
-	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
 }
 
 /**
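As a closing illustration of what the series buys, a small user-space
sketch that counts how often the walk would re-take the lock for
unsorted versus sorted keys. The "lock" is only a counter standing in
for lru_lock, and the key pattern is invented; the point is just that
grouping equal lruvec keys minimizes lock acquisitions.

#include <stdio.h>
#include <stdlib.h>

/* the walk re-takes the lock only when the key changes between entries */
static int count_lock_takes(const unsigned long *key, int n)
{
	int i, takes;

	if (n == 0)
		return 0;
	takes = 1;		/* first lock, for key[0] */
	for (i = 1; i < n; i++)
		if (key[i] != key[i - 1])
			takes++;
	return takes;
}

static int cmp_ulong(const void *a, const void *b)
{
	unsigned long x = *(const unsigned long *)a;
	unsigned long y = *(const unsigned long *)b;

	return (x > y) - (x < y);
}

int main(void)
{
	unsigned long key[15] = { 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2 };

	printf("unsorted: %d lock takes\n", count_lock_takes(key, 15)); /* 15 */
	qsort(key, 15, sizeof(key[0]), cmp_ulong);
	printf("sorted:   %d lock takes\n", count_lock_takes(key, 15)); /* 3 */
	return 0;
}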