From patchwork Fri Apr  3 20:41:33 2020
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 11473559
From: Yang Shi <yang.shi@linux.alibaba.com>
To: daniel.m.jordan@oracle.com, kirill.shutemov@linux.intel.com,
    hughd@google.com, aarcange@redhat.com, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [v2 PATCH] mm: thp: don't need to drain lru cache when splitting and mlocking THP
Date: Sat,  4 Apr 2020 04:41:33 +0800
Message-Id: <1585946493-7531-1-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
Since commit 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page
arrival"), a THP no longer stays in the per-CPU lru pagevecs.  So the
optimization made by commit d965432234db ("thp: increase
split_huge_page() success rate"), which tried to unpin munlocked THPs
by draining the pagevecs before splitting, no longer makes sense.

Draining the lru cache before isolating the THP in the mlock path is
also unnecessary.  Commit b676b293fb48 ("mm, thp: fix mapped pages
avoiding unevictable list on mlock") added it, and commit 9a73f61bdb8a
("thp, mlock: do not mlock PTE-mapped file huge pages") accidentally
carried it over after the above optimization went in.

Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
v2: * Adopted Daniel's comment about the commit log.
    * Collected Reviewed-by and Acked-by from Daniel and Kirill.

 mm/huge_memory.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199..1af2e7d6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1527,7 +1527,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		goto skip_mlock;
 	if (!trylock_page(page))
 		goto skip_mlock;
-	lru_add_drain();
 	if (page->mapping && !PageDoubleMap(page))
 		mlock_vma_page(page);
 	unlock_page(page);
@@ -2711,7 +2710,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	bool mlocked;
 	unsigned long flags;
 	pgoff_t end;
 
@@ -2770,14 +2768,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	mlocked = PageMlocked(head);
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* Make sure the page is not on per-CPU pagevec as it takes pin */
-	if (mlocked)
-		lru_add_drain();
-
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irqsave(&pgdata->lru_lock, flags);
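
For reference, the behavior the commit log relies on comes from
8f182270dfec, which flushes the per-CPU lru pagevec as soon as a
compound page is added, so a THP never sits in a pagevec holding an
extra pin.  The __lru_cache_add() below is only a rough sketch of that
logic from the mm/swap.c of that era, not part of this patch, and the
exact tree may differ in detail:

/* Approximate illustration of the check added by 8f182270dfec */
static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	/*
	 * Flush the pagevec right away when a compound page (e.g. a THP)
	 * arrives, instead of letting it sit there pinning the page; this
	 * is why draining before split/mlock is no longer needed.
	 */
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);
	put_cpu_var(lru_add_pvec);
}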