From patchwork Tue Aug 11 11:10:27 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11709081
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [Resend PATCH 1/6] mm/memcg: warning on !memcg after readahead page charged
Date: Tue, 11 Aug 2020 19:10:27 +0800
Message-Id: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Since readahead pages are now charged to a memcg as well, in theory we no
longer need this !memcg exception. Before removing the checks entirely,
add a warning for the unexpected !memcg case.

Signed-off-by: Alex Shi
Acked-by: Michal Hocko
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/mmdebug.h | 13 +++++++++++++
 mm/memcontrol.c         | 15 ++++++++-------
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 2ad72d2c8cc5..4ed52879ce55 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -37,6 +37,18 @@
 			BUG();						\
 		}							\
 	} while (0)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)	({			\
+	static bool __section(.data.once) __warned;			\
+	int __ret_warn_once = !!(cond);					\
+									\
+	if (unlikely(__ret_warn_once && !__warned)) {			\
+		dump_page(page, "VM_WARN_ON_ONCE_PAGE(" __stringify(cond)")");\
+		__warned = true;					\
+		WARN_ON(1);						\
+	}								\
+	unlikely(__ret_warn_once);					\
+})
+
 #define VM_WARN_ON(cond) (void)WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) (void)WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) (void)WARN_ONCE(cond, format)
@@ -48,6 +60,7 @@
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
+#define VM_WARN_ON_ONCE_PAGE(cond, page)  BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN(cond, format...) BUILD_BUG_ON_INVALID(cond)
 #endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 130093bdf74b..299382fc55a9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1322,10 +1322,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 	}
 
 	memcg = page->mem_cgroup;
-	/*
-	 * Swapcache readahead pages are added to the LRU - and
-	 * possibly migrated - before they are charged.
-	 */
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		memcg = root_mem_cgroup;
 
@@ -6906,8 +6904,9 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	if (newpage->mem_cgroup)
 		return;
 
-	/* Swapcache readahead pages can get replaced before being charged */
 	memcg = oldpage->mem_cgroup;
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
 	if (!memcg)
 		return;
 
@@ -7104,7 +7103,8 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 
 	memcg = page->mem_cgroup;
 
-	/* Readahead page, never charged */
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return;
 
@@ -7168,7 +7168,8 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 
 	memcg = page->mem_cgroup;
 
-	/* Readahead page, never charged */
+	/* Readahead page is charged too, to see if other page uncharged */
+	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
 		return 0;
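A note on the warn-once idiom used by VM_WARN_ON_ONCE_PAGE() above: each
expansion site carries its own static flag, so the page is dumped and
WARN_ON(1) fires only on the first hit, while the whole expression still
evaluates to the condition so callers can branch on it. The minimal
userspace analogue below (plain C with GCC statement expressions;
WARN_ONCE_DEMO and its fprintf reporting are made-up stand-ins, not
kernel APIs) shows the same behaviour in isolation:

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Userspace analogue of a warn-once check: the static flag inside the
	 * statement expression makes the diagnostic fire only on the first hit
	 * at this expansion site, while the expression still yields the
	 * condition's value so the caller can branch on it.
	 */
	#define WARN_ONCE_DEMO(cond, msg) ({				\
		static bool __warned;					\
		bool __ret = !!(cond);					\
		if (__ret && !__warned) {				\
			__warned = true;				\
			fprintf(stderr, "warning (once): %s\n", (msg));	\
		}							\
		__ret;							\
	})

	int main(void)
	{
		for (int i = 0; i < 4; i++) {
			/* True for every even i, but the warning prints only once. */
			if (WARN_ONCE_DEMO(i % 2 == 0, "unexpected even value"))
				printf("handled i=%d\n", i);
		}
		return 0;
	}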
From patchwork Tue Aug 11 11:10:28 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11709077
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [Resend PATCH 2/6] mm/memcg: remove useless check on page->mem_cgroup
Date: Tue, 11 Aug 2020 19:10:28 +0800
Message-Id: <1597144232-11370-2-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>

If memcg is disabled with cgroup_disable=memory, the swap charging
functions are still called. Return from these functions earlier in that
case, and keep the WARN_ON as a monitor.
Signed-off-by: Alex Shi
Reviewed-by: Roman Gushchin
Acked-by: Michal Hocko
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/memcontrol.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 299382fc55a9..419cf565f40b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7098,6 +7098,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page), page);
 
+	if (mem_cgroup_disabled())
+		return;
+
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
@@ -7163,6 +7166,9 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
+	if (mem_cgroup_disabled())
+		return 0;
+
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
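The change itself is a plain guard clause: bail out before touching any
swap accounting when the memory controller was disabled at boot, then
apply the existing hierarchy check. A minimal userspace sketch of the
pattern follows; subsystem_disabled and on_default_hierarchy are made-up
stand-ins for mem_cgroup_disabled() and cgroup_subsys_on_dfl(), and the
printf stands in for the real charging work:

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-ins for mem_cgroup_disabled() / cgroup_subsys_on_dfl(). */
	static bool subsystem_disabled = true;   /* e.g. booted with cgroup_disable=memory */
	static bool on_default_hierarchy = true; /* cgroup v2 mounted */

	static int try_charge_swap_demo(void)
	{
		/* Guard clause first: nothing to account when the controller is off. */
		if (subsystem_disabled)
			return 0;
		/* Swap entries are only charged on the default (v2) hierarchy. */
		if (!on_default_hierarchy)
			return 0;

		printf("charging swap entry\n");
		return 0;
	}

	int main(void)
	{
		try_charge_swap_demo();		/* returns quietly: controller disabled */

		subsystem_disabled = false;
		try_charge_swap_demo();		/* now performs the (pretend) charge */
		return 0;
	}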
From patchwork Tue Aug 11 11:10:29 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11709083
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: Johannes Weiner, Matthew Wilcox, Hugh Dickins, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [Resend PATCH 3/6] mm/thp: move lru_add_page_tail func to huge_memory.c
Date: Tue, 11 Aug 2020 19:10:29 +0800
Message-Id: <1597144232-11370-3-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>

The function is only used in huge_memory.c; defining it in another file
under a CONFIG_TRANSPARENT_HUGEPAGE guard just looks odd. Move it into
the THP code, and make it static as Hugh Dickins suggested.
Signed-off-by: Alex Shi
Reviewed-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/swap.h |  2 --
 mm/huge_memory.c     | 30 ++++++++++++++++++++++++++++++
 mm/swap.c            | 33 ---------------------------------
 3 files changed, 30 insertions(+), 35 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 661046994db4..43e6b3458f58 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -338,8 +338,6 @@ extern void lru_note_cost(struct lruvec *lruvec, bool file,
 			  unsigned int nr_pages);
 extern void lru_note_cost_page(struct page *);
 extern void lru_cache_add(struct page *);
-extern void lru_add_page_tail(struct page *page, struct page *page_tail,
-			      struct lruvec *lruvec, struct list_head *head);
 extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 90733cefa528..bc905e7079bf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2315,6 +2315,36 @@ static void remap_page(struct page *page)
 	}
 }
 
+static void lru_add_page_tail(struct page *page, struct page *page_tail,
+				struct lruvec *lruvec, struct list_head *list)
+{
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
+	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
+	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
+
+	if (!list)
+		SetPageLRU(page_tail);
+
+	if (likely(PageLRU(page)))
+		list_add_tail(&page_tail->lru, &page->lru);
+	else if (list) {
+		/* page reclaim is reclaiming a huge page */
+		get_page(page_tail);
+		list_add_tail(&page_tail->lru, list);
+	} else {
+		/*
+		 * Head page has not yet been counted, as an hpage,
+		 * so we must account for each subpage individually.
+		 *
+		 * Put page_tail on the list at the correct position
+		 * so they all end up in order.
+		 */
+		add_page_to_lru_list_tail(page_tail, lruvec,
+					  page_lru(page_tail));
+	}
+}
+
 static void __split_huge_page_tail(struct page *head, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
diff --git a/mm/swap.c b/mm/swap.c
index d16d65d9b4e0..c674fb441fe9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -935,39 +935,6 @@ void __pagevec_release(struct pagevec *pvec)
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/* used by __split_huge_page_refcount() */
-void lru_add_page_tail(struct page *page, struct page *page_tail,
-		       struct lruvec *lruvec, struct list_head *list)
-{
-	VM_BUG_ON_PAGE(!PageHead(page), page);
-	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
-	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
-	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
-
-	if (!list)
-		SetPageLRU(page_tail);
-
-	if (likely(PageLRU(page)))
-		list_add_tail(&page_tail->lru, &page->lru);
-	else if (list) {
-		/* page reclaim is reclaiming a huge page */
-		get_page(page_tail);
-		list_add_tail(&page_tail->lru, list);
-	} else {
-		/*
-		 * Head page has not yet been counted, as an hpage,
-		 * so we must account for each subpage individually.
-		 *
-		 * Put page_tail on the list at the correct position
-		 * so they all end up in order.
-		 */
-		add_page_to_lru_list_tail(page_tail, lruvec,
-					  page_lru(page_tail));
-	}
-}
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-
 static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 				 void *arg)
 {
From patchwork Tue Aug 11 11:10:30 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11709087
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: Johannes Weiner, Matthew Wilcox, Hugh Dickins, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [Resend PATCH 4/6] mm/thp: clean up lru_add_page_tail
Date: Tue, 11 Aug 2020 19:10:30 +0800
Message-Id: <1597144232-11370-4-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>

Since the first parameter is only ever a THP head page, make that
explicit by renaming it from "page" to "head".

Signed-off-by: Alex Shi
Reviewed-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/huge_memory.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bc905e7079bf..8cecd39bd8b7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2315,19 +2315,19 @@ static void remap_page(struct page *page)
 	}
 }
 
-static void lru_add_page_tail(struct page *page, struct page *page_tail,
+static void lru_add_page_tail(struct page *head, struct page *page_tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
-	VM_BUG_ON_PAGE(!PageHead(page), page);
-	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
-	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
+	VM_BUG_ON_PAGE(!PageHead(head), head);
+	VM_BUG_ON_PAGE(PageCompound(page_tail), head);
+	VM_BUG_ON_PAGE(PageLRU(page_tail), head);
 	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
 
 	if (!list)
 		SetPageLRU(page_tail);
 
-	if (likely(PageLRU(page)))
-		list_add_tail(&page_tail->lru, &page->lru);
+	if (likely(PageLRU(head)))
+		list_add_tail(&page_tail->lru, &head->lru);
 	else if (list) {
 		/* page reclaim is reclaiming a huge page */
 		get_page(page_tail);
From patchwork Tue Aug 11 11:10:31 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11709079
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: "Kirill A. Shutemov", Johannes Weiner, Matthew Wilcox, Hugh Dickins, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [Resend PATCH 5/6] mm/thp: remove code path which never got into
Date: Tue, 11 Aug 2020 19:10:31 +0800
Message-Id: <1597144232-11370-5-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>

split_huge_page() is never called on a page which isn't on the LRU list,
so this code path never ran and should not run: it would add tail pages
to an LRU list that the head page isn't on. Although the path was never
triggered, remove it for code correctness, and add a warning for the
unexpected case.

Signed-off-by: Alex Shi
Reviewed-by: Kirill A. Shutemov
Cc: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/huge_memory.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8cecd39bd8b7..d55e3006c63f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2332,17 +2332,8 @@ static void lru_add_page_tail(struct page *head, struct page *page_tail,
 		/* page reclaim is reclaiming a huge page */
 		get_page(page_tail);
 		list_add_tail(&page_tail->lru, list);
-	} else {
-		/*
-		 * Head page has not yet been counted, as an hpage,
-		 * so we must account for each subpage individually.
-		 *
-		 * Put page_tail on the list at the correct position
-		 * so they all end up in order.
-		 */
-		add_page_to_lru_list_tail(page_tail, lruvec,
-					  page_lru(page_tail));
-	}
+	} else
+		VM_WARN_ON(!PageLRU(head));
 }
 
 static void __split_huge_page_tail(struct page *head, int tail,
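After the removal, a freshly split tail page has only two live
destinations, and the priority of the checks is unchanged: the head being
on the LRU wins over a reclaim-supplied list. The stand-alone sketch
below only mirrors that branch order; place_tail(), its enum and the two
booleans are illustrative stand-ins, not kernel code:

	#include <stdbool.h>
	#include <stdio.h>

	/* Where a freshly split tail page can end up after this patch. */
	enum tail_dest {
		TAIL_AFTER_HEAD,	/* head is on the LRU: queue the tail right behind it */
		TAIL_RECLAIM_LIST,	/* reclaim passed its own list while splitting */
		TAIL_UNEXPECTED,	/* head off-LRU and no list: now only a warning */
	};

	/* Mirrors the branch order above: PageLRU(head) is tested before the list. */
	static enum tail_dest place_tail(bool head_on_lru, bool have_reclaim_list)
	{
		if (head_on_lru)
			return TAIL_AFTER_HEAD;
		if (have_reclaim_list)
			return TAIL_RECLAIM_LIST;
		return TAIL_UNEXPECTED;
	}

	int main(void)
	{
		printf("head on LRU          -> %d\n", place_tail(true, false));
		printf("reclaim-driven split -> %d\n", place_tail(false, true));
		printf("neither (warn case)  -> %d\n", place_tail(false, false));
		return 0;
	}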
From patchwork Tue Aug 11 11:10:32 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11709085
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: Wei Yang, Hugh Dickins, "Kirill A. Shutemov", Andrea Arcangeli, Johannes Weiner, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [Resend PATCH 6/6] mm/thp: narrow lru locking
Date: Tue, 11 Aug 2020 19:10:32 +0800
Message-Id: <1597144232-11370-6-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1597144232-11370-1-git-send-email-alex.shi@linux.alibaba.com>

With the current sequence there is no reason for lru_lock and the page
cache xa_lock to be held together, so nesting them is unnecessary.
Narrow the lru locking, but keep local_irq_disable() to block interrupt
re-entry and keep the statistics updates safe.

Hugh Dickins pointed out that split_huge_page_to_list() was already
silly to be using the _irqsave variant: it has just been taking sleeping
locks, so it would already be broken if entered with interrupts enabled.
So we can also stop passing the flags argument down to
__split_huge_page().

Signed-off-by: Alex Shi
Signed-off-by: Wei Yang
Reviewed-by: Kirill A. Shutemov
Cc: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/huge_memory.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d55e3006c63f..e9c31d91da8c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2399,7 +2399,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned long flags)
+		pgoff_t end)
 {
 	struct page *head = compound_head(page);
 	pg_data_t *pgdat = page_pgdat(head);
@@ -2408,8 +2408,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unsigned long offset = 0;
 	int i;
 
-	lruvec = mem_cgroup_page_lruvec(head, pgdat);
-
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(head);
 
@@ -2421,6 +2419,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
+	/* prevent PageLRU to go away from under us, and freeze lru stats */
+	spin_lock(&pgdat->lru_lock);
+
+	lruvec = mem_cgroup_page_lruvec(head, pgdat);
+
 	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2440,6 +2443,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
+	spin_unlock(&pgdat->lru_lock);
+	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, HPAGE_PMD_ORDER);
 
@@ -2457,8 +2462,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		page_ref_add(head, 2);
 		xa_unlock(&head->mapping->i_pages);
 	}
-
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	local_irq_enable();
 
 	remap_page(head);
 
@@ -2597,12 +2601,10 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
-	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
-	unsigned long flags;
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2663,9 +2665,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
-	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lru_lock, flags);
-
+	/* block interrupt reentry in xa_lock and spinlock */
+	local_irq_disable();
 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
 
@@ -2695,7 +2696,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			__dec_node_page_state(head, NR_FILE_THPS);
 	}
 
-	__split_huge_page(page, list, end, flags);
+	__split_huge_page(page, list, end);
 
 	if (PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
@@ -2714,7 +2715,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:	if (mapping)
 		xa_unlock(&mapping->i_pages);
-	spin_unlock_irqrestore(&pgdata->lru_lock, flags);
+	local_irq_enable();
 	remap_page(head);
 	ret = -EBUSY;
 }
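The net effect of this last patch is easiest to see as a linear trace of
the lock/unlock sequence: interrupts stay disabled for the whole window,
the page cache xa_lock is taken where needed, and lru_lock now only wraps
the LRU manipulation instead of the entire split. The sketch below just
prints that sequence for the file-backed case; trace() and
split_huge_page_locking_sketch() are made-up stubs, no real locks are
taken, and error paths are omitted:

	#include <stdio.h>

	static void trace(const char *step)
	{
		printf("%s\n", step);
	}

	/*
	 * Rough trace of the locking order in split_huge_page_to_list() /
	 * __split_huge_page() after this patch (file-backed case). Previously
	 * the whole region sat under spin_lock_irqsave(&pgdat->lru_lock, flags).
	 */
	static void split_huge_page_locking_sketch(void)
	{
		trace("local_irq_disable()");
		trace("xa_lock(&mapping->i_pages)");
		trace("spin_lock(&pgdat->lru_lock)   /* narrowed: tail-page/LRU work only */");
		trace("  ... __split_huge_page_tail() for each tail page ...");
		trace("spin_unlock(&pgdat->lru_lock)");
		trace("  ... split_page_owner(), refcount fixups ...");
		trace("xa_unlock(&mapping->i_pages)");
		trace("local_irq_enable()");
	}

	int main(void)
	{
		split_huge_page_locking_sketch();
		return 0;
	}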