From patchwork Sun May 24 13:43:18 2020
X-Patchwork-Submitter: Konstantin Khlebnikov
X-Patchwork-Id: 11567361
Subject: [PATCH v2] mm: remove VM_BUG_ON(PageSlab()) from page_mapcount()
From: Konstantin Khlebnikov
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton
Cc: Hugh Dickins, Vlastimil Babka, David Rientjes, "Kirill A. Shutemov"
Date: Sun, 24 May 2020 16:43:18 +0300
Message-ID: <159032779896.957378.7852761411265662220.stgit@buzz>
User-Agent: StGit/0.22-39-gd257

Replace the superfluous VM_BUG_ON() with a comment about correct usage.

Technically this reverts commit 1d148e218a0d0566b1c06f2f45f1436d53b049b2
("mm: add VM_BUG_ON_PAGE() to page_mapcount()"), but the context has changed.

Function isolate_migratepages_block() runs some checks outside of lru_lock
when choosing pages for migration. After checking PageLRU() it checks for
extra page references by comparing page_count() and page_mapcount(). Between
these two checks the page can be removed from the LRU, freed and taken by
slab. As a result this race triggers VM_BUG_ON(PageSlab()) in page_mapcount().

The race window is tiny. For a certain workload this happens around once a year.

page:ffffea0105ca9380 count:1 mapcount:0 mapping:ffff88ff7712c180 index:0x0
compound_mapcount: 0
flags: 0x500000000008100(slab|head)
raw: 0500000000008100 dead000000000100 dead000000000200 ffff88ff7712c180
raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(PageSlab(page))
------------[ cut here ]------------
kernel BUG at ./include/linux/mm.h:628!
invalid opcode: 0000 [#1] SMP NOPTI
CPU: 77 PID: 504 Comm: kcompactd1 Tainted: G W 4.19.109-27 #1
Hardware name: Yandex T175-N41-Y3N/MY81-EX0-Y3N, BIOS R05 06/20/2019
RIP: 0010:isolate_migratepages_block+0x986/0x9b0

The code in isolate_migratepages_block() was added in commit 119d6d59dcc0
("mm, compaction: avoid isolating pinned pages") before the VM_BUG_ON was
added to page_mapcount().

This race was predicted in 2015 by Vlastimil Babka (see link below).

Signed-off-by: Konstantin Khlebnikov
Fixes: 1d148e218a0d ("mm: add VM_BUG_ON_PAGE() to page_mapcount()")
Link: https://lore.kernel.org/lkml/557710E1.6060103@suse.cz/
Link: https://lore.kernel.org/linux-mm/158937872515.474360.5066096871639561424.stgit@buzz/T/ (v1)
Acked-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
---
 include/linux/mm.h |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a323422d783..95f777f482ac 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -782,6 +782,11 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
 
 extern void kvfree(const void *addr);
 
+/*
+ * Mapcount of a compound page as a whole; does not include mapped sub-pages.
+ *
+ * Must be called only for compound pages or any of their tail sub-pages.
+ */
 static inline int compound_mapcount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
@@ -801,10 +806,15 @@ static inline void page_mapcount_reset(struct page *page)
 
 int __page_mapcount(struct page *page);
 
+/*
+ * Mapcount of a 0-order page; for a sub-page, includes compound_mapcount().
+ *
+ * The result is undefined for pages which cannot be mapped into userspace,
+ * for example SLAB or special types of pages: see page_has_type().
+ * They use this place in struct page differently.
+ */
 static inline int page_mapcount(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageSlab(page), page);
-
 	if (unlikely(PageCompound(page)))
 		return __page_mapcount(page);
 	return atomic_read(&page->_mapcount) + 1;
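
For illustration, a condensed, hand-written sketch of the unlocked check
sequence described in the changelog. The helper name racy_pin_check() is
hypothetical and the body only paraphrases the checks; the real logic lives
inline in isolate_migratepages_block() in mm/compaction.c:

#include <linux/mm.h>
#include <linux/page-flags.h>

/* Hypothetical helper, paraphrasing the unlocked checks in
 * isolate_migratepages_block(); not the exact upstream code. */
static bool racy_pin_check(struct page *page)
{
	/* Runs without lru_lock held. */
	if (!PageLRU(page))
		return false;

	/*
	 * Between the PageLRU() test above and the comparison below the
	 * page may be removed from the LRU, freed and reused by SLAB.
	 * page_mapcount() used to hit VM_BUG_ON_PAGE(PageSlab(page)) here;
	 * with this patch it simply returns an undefined value and the
	 * comparison remains a harmless heuristic for pinned anonymous pages.
	 */
	if (!page_mapping(page) &&
	    page_count(page) > page_mapcount(page))
		return true;	/* looks pinned, skip isolation */

	return false;
}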