From patchwork Wed Nov 20 16:58:47 2019
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11254537
From: Johannes Weiner
To: Andrew Morton
Cc: Hugh Dickins, Shakeel Butt, Michal Hocko, Alex Shi, Roman Gushchin,
    linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH] mm: fix unsafe page -> lruvec lookups with cgroup charge migration
Date: Wed, 20 Nov 2019 11:58:47 -0500
Message-Id: <20191120165847.423540-1-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.24.0

While reviewing the "per lruvec lru_lock for memcg" series, Hugh and I
noticed two places in the existing code where the page -> memcg ->
lruvec lookup can result in a use-after-free bug. This affects cgroup1
setups that have charge migration enabled.

To pin page->mem_cgroup, callers need to either have the page locked,
have an exclusive refcount (0), or hold the lru_lock and "own" PageLRU
(either ensure it's set, or be the one holding the page in isolation),
so that cgroup migration fails the isolation step. Failure to follow
this can result in the page being moved out of its memcg, and the memcg
- along with its lruvecs - being freed while the observer is still
dereferencing them.

1. isolate_lru_page() calls mem_cgroup_page_lruvec() with the lru_lock
   held but before testing PageLRU. It doesn't dereference the returned
   lruvec before testing PageLRU, which gives the impression that the
   ordering might be safe after all - but mem_cgroup_page_lruvec()
   itself touches the lruvec to lazily initialize its pgdat back
   pointer. This one is easy to fix: just move the lookup into the
   PageLRU branch.

2. pagevec_lru_move_fn() conveniently looks up the lruvec for all the
   callbacks it might get invoked on. Unfortunately, it's the callbacks
   that first check PageLRU under the lru_lock, which makes this
   ordering just as unsafe as the one in isolate_lru_page(). Remove the
   lruvec argument from the move callbacks and let them do the lookup
   inside their PageLRU branches.
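To make the ordering rule concrete, here is a small illustrative sketch
(not part of the patch below) contrasting the two lookup orders in
isolate_lru_page(), using the same primitives the patch touches and
assuming the local variables page, pgdat and lruvec as declared in that
function:

	/* UNSAFE: the lruvec is derived from page->mem_cgroup before
	 * PageLRU is re-checked under lru_lock. mem_cgroup_page_lruvec()
	 * writes the pgdat back pointer into the lruvec, so this can
	 * dereference a lruvec that a racing charge migration has
	 * already freed along with its memcg.
	 */
	spin_lock_irq(&pgdat->lru_lock);
	lruvec = mem_cgroup_page_lruvec(page, pgdat);	/* touches lruvec */
	if (PageLRU(page)) {
		get_page(page);
		ClearPageLRU(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
	}
	spin_unlock_irq(&pgdat->lru_lock);

	/* SAFE: look up the lruvec only after PageLRU has been confirmed
	 * under lru_lock. Owning PageLRU makes charge migration's
	 * isolation step fail, which pins page->mem_cgroup and therefore
	 * the lruvec for the duration of the critical section.
	 */
	spin_lock_irq(&pgdat->lru_lock);
	if (PageLRU(page)) {
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		get_page(page);
		ClearPageLRU(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
	}
	spin_unlock_irq(&pgdat->lru_lock);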
Reported-by: Hugh Dickins
Reviewed-by: Shakeel Butt
Signed-off-by: Johannes Weiner
---
 mm/swap.c   | 48 +++++++++++++++++++++++++++++-------------------
 mm/vmscan.c |  8 ++++----
 2 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 5341ae93861f..6b015e9532fb 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -188,12 +188,11 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
 EXPORT_SYMBOL_GPL(get_kernel_page);
 
 static void pagevec_lru_move_fn(struct pagevec *pvec,
-	void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
+	void (*move_fn)(struct page *page, void *arg),
 	void *arg)
 {
 	int i;
 	struct pglist_data *pgdat = NULL;
-	struct lruvec *lruvec;
 	unsigned long flags = 0;
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
@@ -207,8 +206,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 			spin_lock_irqsave(&pgdat->lru_lock, flags);
 		}
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		(*move_fn)(page, lruvec, arg);
+		(*move_fn)(page, arg);
 	}
 	if (pgdat)
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
@@ -216,12 +214,14 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	pagevec_reinit(pvec);
 }
 
-static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
-				 void *arg)
+static void pagevec_move_tail_fn(struct page *page, void *arg)
 {
 	int *pgmoved = arg;
 
 	if (PageLRU(page) && !PageUnevictable(page)) {
+		struct lruvec *lruvec;
+
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
@@ -272,12 +272,14 @@ static void update_page_reclaim_stat(struct lruvec *lruvec,
 		reclaim_stat->recent_rotated[file]++;
 }
 
-static void __activate_page(struct page *page, struct lruvec *lruvec,
-			    void *arg)
+static void __activate_page(struct page *page, void *arg)
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_cache(page);
 		int lru = page_lru_base_type(page);
+		struct lruvec *lruvec;
+
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
@@ -328,7 +330,7 @@ void activate_page(struct page *page)
 
 	page = compound_head(page);
 	spin_lock_irq(&pgdat->lru_lock);
-	__activate_page(page, mem_cgroup_page_lruvec(page, pgdat), NULL);
+	__activate_page(page, NULL);
 	spin_unlock_irq(&pgdat->lru_lock);
 }
 #endif
@@ -498,9 +500,9 @@ void lru_cache_add_active_or_unevictable(struct page *page,
  * be write it out by flusher threads as this is much more effective
  * than the single-page writeout from reclaim.
  */
-static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
-			      void *arg)
+static void lru_deactivate_file_fn(struct page *page, void *arg)
 {
+	struct lruvec *lruvec;
 	int lru, file;
 	bool active;
 
@@ -518,6 +520,8 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 	file = page_is_file_cache(page);
 	lru = page_lru_base_type(page);
 
+	lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
+
 	del_page_from_lru_list(page, lruvec, lru + active);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
@@ -544,12 +548,14 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 	update_page_reclaim_stat(lruvec, file, 0);
 }
 
-static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
-			    void *arg)
+static void lru_deactivate_fn(struct page *page, void *arg)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_cache(page);
 		int lru = page_lru_base_type(page);
+		struct lruvec *lruvec;
+
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
@@ -561,12 +567,14 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 	}
 }
 
-static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
-			    void *arg)
+static void lru_lazyfree_fn(struct page *page, void *arg)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
+		struct lruvec *lruvec;
+
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 
 		del_page_from_lru_list(page, lruvec,
 				       LRU_INACTIVE_ANON + active);
@@ -921,15 +929,17 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
-				 void *arg)
+static void __pagevec_lru_add_fn(struct page *page, void *arg)
 {
-	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	struct lruvec *lruvec;
+	enum lru_list lru;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
-	SetPageLRU(page);
+
+	lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
+
 	/*
 	 * Page becomes evictable in two ways:
 	 * 1) Within LRU lock [munlock_vma_page() and __munlock_pagevec()].
diff --git a/mm/vmscan.c b/mm/vmscan.c
index df859b1d583c..3c8b81990146 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1767,15 +1767,15 @@ int isolate_lru_page(struct page *page)
 
 	if (PageLRU(page)) {
 		pg_data_t *pgdat = page_pgdat(page);
-		struct lruvec *lruvec;
 
 		spin_lock_irq(&pgdat->lru_lock);
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		if (PageLRU(page)) {
-			int lru = page_lru(page);
+			struct lruvec *lruvec;
+
+			lruvec = mem_cgroup_page_lruvec(page, pgdat);
 			get_page(page);
 			ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, lru);
+			del_page_from_lru_list(page, lruvec, page_lru(page));
 			ret = 0;
 		}
 		spin_unlock_irq(&pgdat->lru_lock);