From patchwork Wed Sep 4 13:53:10 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130341
Subject: [PATCH v1 1/7] mm/memcontrol: move locking page out of
 mem_cgroup_move_account
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:10 +0300
Message-ID: <156760519049.6560.475471327815521193.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

This is required for calling mem_cgroup_move_account() for an already
locked page.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 mm/memcontrol.c | 64 +++++++++++++++++++++++++++----------------------------
 1 file changed, 31 insertions(+), 33 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9ec5e12486a7..40ddc233e973 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5135,7 +5135,8 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
  * @from: mem_cgroup which the page is moved from.
  * @to: mem_cgroup which the page is moved to. @from != @to.
  *
- * The caller must make sure the page is not on LRU (isolate_page() is useful.)
+ * The caller must lock the page and make sure it is not on LRU
+ * (isolate_page() is useful.)
  *
  * This function doesn't do "charge" to new cgroup and doesn't do "uncharge"
  * from old cgroup.
@@ -5147,24 +5148,15 @@ static int mem_cgroup_move_account(struct page *page,
 {
 	unsigned long flags;
 	unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1;
-	int ret;
 	bool anon;
 
 	VM_BUG_ON(from == to);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON(compound && !PageTransHuge(page));
 
-	/*
-	 * Prevent mem_cgroup_migrate() from looking at
-	 * page->mem_cgroup of its source page while we change it.
-	 */
-	ret = -EBUSY;
-	if (!trylock_page(page))
-		goto out;
-
-	ret = -EINVAL;
 	if (page->mem_cgroup != from)
-		goto out_unlock;
+		return -EINVAL;
 
 	anon = PageAnon(page);
 
@@ -5204,18 +5196,14 @@ static int mem_cgroup_move_account(struct page *page,
 	page->mem_cgroup = to;
 
 	spin_unlock_irqrestore(&from->move_lock, flags);
 
-	ret = 0;
-
 	local_irq_disable();
 	mem_cgroup_charge_statistics(to, page, compound, nr_pages);
 	memcg_check_events(to, page);
 	mem_cgroup_charge_statistics(from, page, compound, -nr_pages);
 	memcg_check_events(from, page);
 	local_irq_enable();
-out_unlock:
-	unlock_page(page);
-out:
-	return ret;
+
+	return 0;
 }
 
 /**
@@ -5535,36 +5523,42 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 	struct vm_area_struct *vma = walk->vma;
 	pte_t *pte;
 	spinlock_t *ptl;
-	enum mc_target_type target_type;
 	union mc_target target;
 	struct page *page;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
+		bool device = false;
+
 		if (mc.precharge < HPAGE_PMD_NR) {
 			spin_unlock(ptl);
 			return 0;
 		}
-		target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
-		if (target_type == MC_TARGET_PAGE) {
-			page = target.page;
-			if (!isolate_lru_page(page)) {
-				if (!mem_cgroup_move_account(page, true,
-							     mc.from, mc.to)) {
-					mc.precharge -= HPAGE_PMD_NR;
-					mc.moved_charge += HPAGE_PMD_NR;
-				}
-				putback_lru_page(page);
-			}
-			put_page(page);
-		} else if (target_type == MC_TARGET_DEVICE) {
+
+		switch (get_mctgt_type_thp(vma, addr, *pmd, &target)) {
+		case MC_TARGET_DEVICE:
+			device = true;
+			/* fall through */
+		case MC_TARGET_PAGE:
 			page = target.page;
+			if (!trylock_page(page))
+				goto put_huge;
+			if (!device && isolate_lru_page(page))
+				goto unlock_huge;
 			if (!mem_cgroup_move_account(page, true,
 						     mc.from, mc.to)) {
 				mc.precharge -= HPAGE_PMD_NR;
 				mc.moved_charge += HPAGE_PMD_NR;
 			}
+			if (!device)
+				putback_lru_page(page);
+unlock_huge:
+			unlock_page(page);
+put_huge:
 			put_page(page);
+			break;
+		default:
+			break;
 		}
 		spin_unlock(ptl);
 		return 0;
@@ -5596,8 +5590,10 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 		 */
 		if (PageTransCompound(page))
 			goto put;
-		if (!device && isolate_lru_page(page))
+		if (!trylock_page(page))
 			goto put;
+		if (!device && isolate_lru_page(page))
+			goto unlock;
 		if (!mem_cgroup_move_account(page, false,
 					     mc.from, mc.to)) {
 			mc.precharge--;
@@ -5606,6 +5602,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 		}
 		if (!device)
 			putback_lru_page(page);
+unlock:
+		unlock_page(page);
put:			/* get_mctgt_type() gets the page */
 		put_page(page);
 		break;
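A minimal sketch of the calling convention this patch establishes
(move_one_page() is hypothetical, not part of the series): the page lock
and LRU isolation now happen in the caller, in that order, and are undone
in reverse.

	/* Hypothetical caller, shown only to illustrate the new contract. */
	static int move_one_page(struct page *page, struct mem_cgroup *from,
				 struct mem_cgroup *to)
	{
		int err = -EBUSY;

		/* the lock mem_cgroup_move_account() used to take itself */
		if (!trylock_page(page))
			return err;
		if (!isolate_lru_page(page)) {
			err = mem_cgroup_move_account(page, PageTransHuge(page),
						      from, to);
			putback_lru_page(page);
		}
		unlock_page(page);
		return err;
	}
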
From patchwork Wed Sep 4 13:53:12 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130343
Subject: [PATCH v1 2/7] mm/memcontrol: add mem_cgroup_recharge
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:12 +0300
Message-ID: <156760519254.6560.3180815463616863318.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

This function tries to move a page into another cgroup. The caller must
lock the page and isolate it from the LRU.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 include/linux/memcontrol.h |  9 +++++++++
 mm/memcontrol.c            | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2cd4359cb38c..d94950584f60 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -352,6 +352,8 @@ void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
+int mem_cgroup_try_recharge(struct page *page, struct mm_struct *mm,
+			    gfp_t gfp_mask);
 
 static struct mem_cgroup_per_node *
 mem_cgroup_nodeinfo(struct mem_cgroup *memcg, int nid)
@@ -857,6 +859,13 @@ static inline void mem_cgroup_migrate(struct page *old, struct page *new)
 {
 }
 
+static inline int mem_cgroup_try_recharge(struct page *page,
+					  struct mm_struct *mm,
+					  gfp_t gfp_mask)
+{
+	return 0;
+}
+
 static inline struct lruvec *mem_cgroup_lruvec(struct pglist_data *pgdat,
 				struct mem_cgroup *memcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 40ddc233e973..953a0bbb9f43 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6507,6 +6507,46 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 	local_irq_restore(flags);
 }
 
+/*
+ * mem_cgroup_try_recharge - try to recharge page to mm's memcg.
+ *
+ * Page must be locked and isolated.
+ */
+int mem_cgroup_try_recharge(struct page *page, struct mm_struct *mm,
+			    gfp_t gfp_mask)
+{
+	struct mem_cgroup *from, *to;
+	int nr_pages;
+	int err = 0;
+
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(PageLRU(page), page);
+
+	if (mem_cgroup_disabled())
+		return 0;
+
+	from = page->mem_cgroup;
+	to = get_mem_cgroup_from_mm(mm);
+
+	if (likely(from == to) || !from)
+		goto out;
+
+	nr_pages = hpage_nr_pages(page);
+	err = try_charge(to, gfp_mask, nr_pages);
+	if (err)
+		goto out;
+
+	err = mem_cgroup_move_account(page, nr_pages > 1, from, to);
+	if (err)
+		cancel_charge(to, nr_pages);
+	else
+		cancel_charge(from, nr_pages);
+out:
+	css_put(&to->css);
+
+	return err;
+}
+
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
 EXPORT_SYMBOL(memcg_sockets_enabled_key);
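A usage sketch of the new helper (recharge_page_to_mm() is hypothetical):
the page must be locked and off the LRU, exactly as the VM_BUG_ON_PAGE()
checks demand; the call can fail with -ENOMEM from try_charge() or
-EINVAL if the page's cgroup changed under us.

	static int recharge_page_to_mm(struct page *page, struct mm_struct *mm)
	{
		int err = -EBUSY;

		lock_page(page);
		if (!isolate_lru_page(page)) {
			err = mem_cgroup_try_recharge(page, mm, GFP_KERNEL);
			putback_lru_page(page);
		}
		unlock_page(page);
		return err;
	}
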
From patchwork Wed Sep 4 13:53:14 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130345
Subject: [PATCH v1 3/7] mm/mlock: add vma argument for mlock_vma_page()
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:14 +0300
Message-ID: <156760519431.6560.14636274673495137289.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

This will be used for recharging memory cgroup accounting.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 mm/gup.c         | 2 +-
 mm/huge_memory.c | 4 ++--
 mm/internal.h    | 4 ++--
 mm/ksm.c         | 2 +-
 mm/migrate.c     | 2 +-
 mm/mlock.c       | 2 +-
 mm/rmap.c        | 2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..f0accc229266 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -306,7 +306,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 			 * know the page is still mapped, we don't even
 			 * need to check for file-cache page truncation.
 			 */
-			mlock_vma_page(page);
+			mlock_vma_page(vma, page);
 			unlock_page(page);
 		}
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index de1f15969e27..157faa231e26 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1513,7 +1513,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 			goto skip_mlock;
 		lru_add_drain();
 		if (page->mapping && !PageDoubleMap(page))
-			mlock_vma_page(page);
+			mlock_vma_page(vma, page);
 		unlock_page(page);
 	}
 skip_mlock:
@@ -3009,7 +3009,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		page_add_file_rmap(new, true);
 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
 	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
-		mlock_vma_page(new);
+		mlock_vma_page(vma, new);
 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
 }
 #endif
diff --git a/mm/internal.h b/mm/internal.h
index e32390802fd3..9f91992ef281 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -305,7 +305,7 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 /*
  * must be called with vma's mmap_sem held for read or write, and page locked.
  */
-extern void mlock_vma_page(struct page *page);
+extern void mlock_vma_page(struct vm_area_struct *vma, struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
 /*
@@ -364,7 +364,7 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 #else /* !CONFIG_MMU */
 static inline void clear_page_mlock(struct page *page) { }
-static inline void mlock_vma_page(struct page *page) { }
+static inline void mlock_vma_page(struct vm_area_struct *vma, struct page *page) { }
 static inline void mlock_migrate_page(struct page *new, struct page *old) { }
 #endif /* !CONFIG_MMU */
diff --git a/mm/ksm.c b/mm/ksm.c
index 3dc4346411e4..cb5705d6f26c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1274,7 +1274,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 		if (!PageMlocked(kpage)) {
 			unlock_page(page);
 			lock_page(kpage);
-			mlock_vma_page(kpage);
+			mlock_vma_page(vma, kpage);
 			page = kpage;		/* for final unlock */
 		}
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index a42858d8e00b..1f6151cb7310 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -269,7 +269,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 			page_add_file_rmap(new, false);
 	}
 	if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
-		mlock_vma_page(new);
+		mlock_vma_page(vma, new);
 
 	if (PageTransHuge(page) && PageMlocked(page))
 		clear_page_mlock(page);
diff --git a/mm/mlock.c b/mm/mlock.c
index a90099da4fb4..73d477aaa411 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -85,7 +85,7 @@ void clear_page_mlock(struct page *page)
  * Mark page as mlocked if not already.
  * If page on LRU, isolate and putback to move to unevictable list.
  */
-void mlock_vma_page(struct page *page)
+void mlock_vma_page(struct vm_area_struct *vma, struct page *page)
 {
 	/* Serialize with page migration */
 	BUG_ON(!PageLocked(page));
diff --git a/mm/rmap.c b/mm/rmap.c
index 003377e24232..de88f4897c1d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1410,7 +1410,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 * Holding pte lock, we do *not* need
 				 * mmap_sem here
 				 */
-				mlock_vma_page(page);
+				mlock_vma_page(vma, page);
 			}
 			ret = false;
 			page_vma_mapped_walk_done(&pvmw);
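The conversion is mechanical; the vma is threaded through so a later
patch in the series can reach vma->vm_mm to pick the recharge target. A
hypothetical call site after this patch:

	if ((vma->vm_flags & VM_LOCKED) && !PageTransCompound(page))
		mlock_vma_page(vma, page);	/* vma->vm_mm names the mlock user */
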
From patchwork Wed Sep 4 13:53:16 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130347
Subject: [PATCH v1 4/7] mm/mlock: recharge memory accounting to first mlock
 user
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:16 +0300
Message-ID: <156760519646.6560.5927254238728419748.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

Currently mlock keeps pages accounted to the cgroup where they were first
charged. This way one container can affect another if they share file
cache. A typical case is one container writing (downloading) a file and
another one locking it; after that the first container cannot get rid of
the file.

This patch recharges accounting to the cgroup which owns the mm of the
mlocking vma. Recharging happens at the first mlock, when PageMlocked is
set. Mlock moves pages onto the unevictable LRU under the pte lock, so we
cannot call the reclaimer from here. To keep things simple, just charge
by force. Memory usage could temporarily exceed the limit, but the cgroup
will reclaim memory later or trigger an oom, which is a valid outcome
when somebody mlocks too much.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 Documentation/admin-guide/cgroup-v1/memory.rst | 5 +++++
 mm/mlock.c                                     | 9 ++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 41bdc038dad9..4c79e5a9153b 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -220,6 +220,11 @@ the cgroup that brought it in -- this will happen on memory pressure).
 But see section 8.2: when moving a task to another cgroup, its pages may
 be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.
 
+Locking pages in memory with mlock() or mmap(MAP_LOCKED) recharges pages
+into current memory cgroup. This recharge ignores memory limit thus memory
+usage could temporarily become higher than limit. After that any allocation
+will reclaim memory down to limit or trigger oom if mlock size does not fit.
+
 Exception: If CONFIG_MEMCG_SWAP is not used.
 When you do swapoff and make swapped-out pages of shmem(tmpfs) to
 be backed into memory in force, charges for pages are accounted against the
diff --git a/mm/mlock.c b/mm/mlock.c
index 73d477aaa411..68f068711203 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -97,8 +97,15 @@ void mlock_vma_page(struct vm_area_struct *vma, struct page *page)
 		mod_zone_page_state(page_zone(page), NR_MLOCK,
 				    hpage_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
-		if (!isolate_lru_page(page))
+		if (!isolate_lru_page(page)) {
+			/*
+			 * Force memory recharge to mlock user. Cannot
+			 * reclaim memory because called under pte lock.
+			 */
+			mem_cgroup_try_recharge(page, vma->vm_mm,
+						GFP_NOWAIT | __GFP_NOFAIL);
 			putback_lru_page(page);
+		}
 	}
 }
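The scenario this patch addresses can be reproduced from userspace; a
runnable sketch follows (the file path and the cgroup setup around it are
illustrative). Run it from the second container after the first one has
populated the cache, then compare memory.usage_in_bytes in both cgroups.

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(void)
	{
		struct stat st;
		int fd = open("/shared/blob", O_RDONLY); /* cached elsewhere */

		if (fd < 0 || fstat(fd, &st)) {
			perror("open/fstat");
			return 1;
		}
		/* MAP_LOCKED faults the pages in and mlocks them; with this
		 * patch they are recharged to this process's memory cgroup. */
		if (mmap(NULL, st.st_size, PROT_READ,
			 MAP_SHARED | MAP_LOCKED, fd, 0) == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		pause();	/* hold the lock while observing usage */
		return 0;
	}
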
From patchwork Wed Sep 4 13:53:18 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130349
Subject: [PATCH v1 5/7] mm/mlock: recharge memory accounting to second mlock
 user at munlock
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:18 +0300
Message-ID: <156760519844.6560.1129059727979832602.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

Munlock isolates the page from the LRU and then looks for another
mlocking vma. Thus we can recharge the page to a second mlock user
without isolating it again.

This patch adds an 'isolated' argument to mlock_vma_page() and passes the
flag through try_to_unmap() as TTU_LRU_ISOLATED.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 include/linux/rmap.h |  3 ++-
 mm/gup.c             |  2 +-
 mm/huge_memory.c     |  4 ++--
 mm/internal.h        |  6 ++++--
 mm/ksm.c             |  2 +-
 mm/migrate.c         |  2 +-
 mm/mlock.c           | 21 ++++++++++++---------
 mm/rmap.c            |  5 +++--
 8 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 988d176472df..4552716ac3da 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -98,7 +98,8 @@ enum ttu_flags {
 					 * do a final flush if necessary */
 	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
 					 * caller holds it */
-	TTU_SPLIT_FREEZE	= 0x100,		/* freeze pte under splitting thp */
+	TTU_SPLIT_FREEZE	= 0x100, /* freeze pte under splitting thp */
+	TTU_LRU_ISOLATED	= 0x200, /* caller isolated page from LRU */
 };
 
 #ifdef CONFIG_MMU
diff --git a/mm/gup.c b/mm/gup.c
index f0accc229266..e0784e9022fe 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -306,7 +306,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 			 * know the page is still mapped, we don't even
 			 * need to check for file-cache page truncation.
 			 */
-			mlock_vma_page(vma, page);
+			mlock_vma_page(vma, page, false);
 			unlock_page(page);
 		}
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 157faa231e26..7822997d765c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1513,7 +1513,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 			goto skip_mlock;
 		lru_add_drain();
 		if (page->mapping && !PageDoubleMap(page))
-			mlock_vma_page(vma, page);
+			mlock_vma_page(vma, page, false);
 		unlock_page(page);
 	}
 skip_mlock:
@@ -3009,7 +3009,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		page_add_file_rmap(new, true);
 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
 	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
-		mlock_vma_page(vma, new);
+		mlock_vma_page(vma, new, false);
 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
 }
 #endif
diff --git a/mm/internal.h b/mm/internal.h
index 9f91992ef281..1639fb581496 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -305,7 +305,8 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 /*
  * must be called with vma's mmap_sem held for read or write, and page locked.
  */
-extern void mlock_vma_page(struct vm_area_struct *vma, struct page *page);
+extern void mlock_vma_page(struct vm_area_struct *vma, struct page *page,
+			   bool isolated);
 extern unsigned int munlock_vma_page(struct page *page);
 
 /*
@@ -364,7 +365,8 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 #else /* !CONFIG_MMU */
 static inline void clear_page_mlock(struct page *page) { }
-static inline void mlock_vma_page(struct vm_area_struct *vma, struct page *page) { }
+static inline void mlock_vma_page(struct vm_area_struct *vma,
+				  struct page *page, bool isolated) { }
 static inline void mlock_migrate_page(struct page *new, struct page *old) { }
 #endif /* !CONFIG_MMU */
diff --git a/mm/ksm.c b/mm/ksm.c
index cb5705d6f26c..bf2a748c5e64 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1274,7 +1274,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 		if (!PageMlocked(kpage)) {
 			unlock_page(page);
 			lock_page(kpage);
-			mlock_vma_page(vma, kpage);
+			mlock_vma_page(vma, kpage, false);
 			page = kpage;		/* for final unlock */
 		}
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index 1f6151cb7310..c13256ddc063 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -269,7 +269,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 			page_add_file_rmap(new, false);
 	}
 	if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
-		mlock_vma_page(vma, new);
+		mlock_vma_page(vma, new, false);
 
 	if (PageTransHuge(page) && PageMlocked(page))
 		clear_page_mlock(page);
diff --git a/mm/mlock.c b/mm/mlock.c
index 68f068711203..07a2ab4d6a6c 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -85,7 +85,8 @@ void clear_page_mlock(struct page *page)
  * Mark page as mlocked if not already.
  * If page on LRU, isolate and putback to move to unevictable list.
  */
-void mlock_vma_page(struct vm_area_struct *vma, struct page *page)
+void mlock_vma_page(struct vm_area_struct *vma, struct page *page,
+		    bool isolated)
 {
 	/* Serialize with page migration */
 	BUG_ON(!PageLocked(page));
@@ -97,15 +98,17 @@ void mlock_vma_page(struct vm_area_struct *vma, struct page *page)
 		mod_zone_page_state(page_zone(page), NR_MLOCK,
 				    hpage_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
-		if (!isolate_lru_page(page)) {
-			/*
-			 * Force memory recharge to mlock user. Cannot
-			 * reclaim memory because called under pte lock.
-			 */
-			mem_cgroup_try_recharge(page, vma->vm_mm,
-						GFP_NOWAIT | __GFP_NOFAIL);
+
+		if (!isolated && isolate_lru_page(page))
+			return;
+		/*
+		 * Force memory recharge to mlock user.
+		 * Cannot reclaim memory because called under pte lock.
+		 */
+		mem_cgroup_try_recharge(page, vma->vm_mm,
+					GFP_NOWAIT | __GFP_NOFAIL);
+		if (!isolated)
 			putback_lru_page(page);
-		}
 	}
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index de88f4897c1d..0b21b27f3519 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1410,7 +1410,8 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				 * Holding pte lock, we do *not* need
 				 * mmap_sem here
 				 */
-				mlock_vma_page(vma, page);
+				mlock_vma_page(vma, page,
+					       !!(flags & TTU_LRU_ISOLATED));
 			}
 			ret = false;
 			page_vma_mapped_walk_done(&pvmw);
@@ -1752,7 +1753,7 @@ void try_to_munlock(struct page *page)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
-		.arg = (void *)TTU_MUNLOCK,
+		.arg = (void *)(TTU_MUNLOCK | TTU_LRU_ISOLATED),
 		.done = page_not_mapped,
 		.anon_lock = page_lock_anon_vma_read,
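A condensed sketch of the munlock-side flow this flag serves (modelled on
the existing __munlock_isolated_page(); simplified, error handling and
statistics omitted): the page is already isolated, so the rmap walk may
hand it straight to another mlocking vma without a second
isolate/putback round trip.

	static void munlock_isolated_page(struct page *page)
	{
		lock_page(page);
		/* Rmap walk with TTU_MUNLOCK | TTU_LRU_ISOLATED: if another
		 * VM_LOCKED vma still maps the page,
		 * mlock_vma_page(vma, page, true) re-mlocks and recharges
		 * it without touching the LRU. */
		if (page_mapcount(page) > 1)
			try_to_munlock(page);
		unlock_page(page);
		putback_lru_page(page); /* back on the LRU of its new memcg */
	}
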
From patchwork Wed Sep 4 13:53:20 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130351
Subject: [PATCH v1 6/7] mm/vmscan: allow changing page memory cgroup during
 reclaim
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:20 +0300
Message-ID: <156760520035.6560.17483443614564028347.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

All LRU lists in one numa node are protected by one spin-lock, and
move_pages_to_lru() already re-evaluates the lruvec for each page. This
allows changing a page's cgroup while the page is isolated by the
reclaimer, although nobody uses that yet.

This patch makes the feature explicit and passes a pgdat rather than a
lruvec pointer into move_pages_to_lru().

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 mm/vmscan.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a6c5d0b28321..bf7a05e8a717 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1873,15 +1873,15 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * The downside is that we have to touch page->_refcount against each page.
  * But we had to alter page->flags anyway.
  *
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to LRU lists.
  */
-static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
+static unsigned noinline_for_stack move_pages_to_lru(struct pglist_data *pgdat,
 						     struct list_head *list)
 {
-	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
+	struct lruvec *lruvec;
 	struct page *page;
 	enum lru_list lru;
 
@@ -1895,6 +1895,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
+
+		/* Re-evaluate lru: isolated page could be moved */
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
@@ -2005,7 +2007,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
 	reclaim_stat->recent_rotated[1] += stat.nr_activate[1];
 
-	move_pages_to_lru(lruvec, &page_list);
+	move_pages_to_lru(pgdat, &page_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 
@@ -2128,8 +2130,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_pages_to_lru(lruvec, &l_active);
-	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+	nr_activate = move_pages_to_lru(pgdat, &l_active);
+	nr_deactivate = move_pages_to_lru(pgdat, &l_inactive);
 
 	/* Keep all free pages in l_active list */
 	list_splice(&l_inactive, &l_active);
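The invariant this patch leans on, condensed from move_pages_to_lru()
after the change (a sketch, not the full function): one pgdat->lru_lock
protects every memcg lruvec on the node, so page->mem_cgroup may change
while a page sits isolated, as long as the lruvec is looked up again
under the lock before the page is re-linked.

	struct lruvec *lruvec;

	spin_lock_irq(&pgdat->lru_lock);
	while (!list_empty(list)) {
		struct page *page = lru_to_page(list);

		list_del(&page->lru);
		/* Re-evaluate per page: the memcg may have changed */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		SetPageLRU(page);
		add_page_to_lru_list(page, lruvec, page_lru(page));
	}
	spin_unlock_irq(&pgdat->lru_lock);
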
From patchwork Wed Sep 4 13:53:22 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11130353
Subject: [PATCH v1 7/7] mm/mlock: recharge mlocked pages at culling by vmscan
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
 Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 04 Sep 2019 16:53:22 +0300
Message-ID: <156760520240.6560.4933207338618527335.stgit@buzz>
In-Reply-To: <156760509382.6560.17364256340940314860.stgit@buzz>
References: <156760509382.6560.17364256340940314860.stgit@buzz>

If mlock cannot catch a page in the LRU then the page is not moved onto
the unevictable LRU right away; such pages are later 'culled' by the
reclaimer, which moves them there. Pages locked with MLOCK_ONFAULT seem
to always take this path.

The reclaimer calls try_to_unmap() for pages it has already isolated,
thus on this path we can freely recharge a page to the owner of any
mlocking vma found in the rmap walk. This patch passes the
TTU_LRU_ISOLATED flag into try_to_unmap() so that mlock_vma_page() can
move such pages.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 mm/vmscan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bf7a05e8a717..2060f254dd6b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1345,7 +1345,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		 * processes. Try to unmap it here.
 		 */
 		if (page_mapped(page)) {
-			enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
+			enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH |
+					       TTU_LRU_ISOLATED;
 
 			if (unlikely(PageTransHuge(page)))
 				flags |= TTU_SPLIT_HUGE_PMD;
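After the whole series, mlock_vma_page() becomes the single recharge
point for all three paths: the page-table walkers pass isolated == false,
while munlock and reclaim culling pass true. Condensed for illustration
(some guard checks of the real function omitted):

	void mlock_vma_page(struct vm_area_struct *vma, struct page *page,
			    bool isolated)
	{
		/* Serialize with page migration */
		BUG_ON(!PageLocked(page));

		if (!TestSetPageMlocked(page)) {
			mod_zone_page_state(page_zone(page), NR_MLOCK,
					    hpage_nr_pages(page));
			count_vm_event(UNEVICTABLE_PGMLOCKED);

			if (!isolated && isolate_lru_page(page))
				return;
			/* Under pte lock: no reclaim, so charge by force */
			mem_cgroup_try_recharge(page, vma->vm_mm,
						GFP_NOWAIT | __GFP_NOFAIL);
			if (!isolated)
				putback_lru_page(page);
		}
	}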