From patchwork Fri Mar 6 06:28:29 2020
Date: Thu, 05 Mar 2020 22:28:29 -0800
From: Andrew Morton
To: aarcange@redhat.com, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, linux-mm@kvack.org, mhocko@kernel.org, mm-commits@vger.kernel.org, stable@vger.kernel.org, torvalds@linux-foundation.org, vbabka@suse.cz, william.kucharski@oracle.com, ying.huang@intel.com
Subject: [patch 2/7] mm: fix possible PMD dirty bit lost in set_pmd_migration_entry()
Message-ID: <20200306062829.35P2DyOg_%akpm@linux-foundation.org>
In-Reply-To: <20200305222751.6d781a3f2802d79510941e4e@linux-foundation.org>

From: Huang Ying
Subject: mm: fix possible PMD dirty bit lost in set_pmd_migration_entry()

In set_pmd_migration_entry(), pmdp_invalidate() is used to change the PMD
atomically.  But the PMD is read before that with an ordinary memory
load.  If the THP (transparent huge page) is written to between the PMD
read and pmdp_invalidate(), the PMD dirty bit may be lost, causing data
corruption.  The race window is quite small, but still possible in
theory, so it needs to be fixed.

The race is fixed by using the return value of pmdp_invalidate() to get
the original contents of the PMD: pmdp_invalidate() is an atomic
read/modify/write operation, so no write to the THP can occur in between.

The race was introduced when THP migration support was added in commit
616b8371539a ("mm: thp: enable thp migration in generic path").  But this
fix depends on commit d52605d7cb30 ("mm: do not lose dirty and accessed
bits in pmdp_invalidate()"), so it can only be backported easily to
v4.16 and later.  The race window is really small, though, so it may be
fine not to backport the fix at all.

Link: http://lkml.kernel.org/r/20200220075220.2327056-1-ying.huang@intel.com
Signed-off-by: "Huang, Ying"
Reviewed-by: William Kucharski
Acked-by: Kirill A. Shutemov
Reviewed-by: Zi Yan
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/mm/huge_memory.c~mm-fix-possible-pmd-dirty-bit-lost-in-set_pmd_migration_entry
+++ a/mm/huge_memory.c
@@ -3043,8 +3043,7 @@ void set_pmd_migration_entry(struct page
 		return;
 
 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
-	pmdval = *pvmw->pmd;
-	pmdp_invalidate(vma, address, pvmw->pmd);
+	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
 	if (pmd_dirty(pmdval))
 		set_page_dirty(page);
 	entry = make_migration_entry(page, pmd_write(pmdval));
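
For illustration, a minimal userspace sketch of the race, assuming the
PMD can be modeled as a single word carrying a dirty bit.  The fake_pmd
variable, the FAKE_PMD_* bits, and the fake_pmdp_invalidate() helper are
hypothetical stand-ins for the real page-table entry and
pmdp_invalidate(), not kernel code; only the ordering of the plain load
versus the atomic read-modify-write mirrors the change above:

#include <stdatomic.h>
#include <stdio.h>

#define FAKE_PMD_VALID	(1UL << 0)	/* stand-in for the present bit */
#define FAKE_PMD_DIRTY	(1UL << 6)	/* stand-in for the hw dirty bit */

static _Atomic unsigned long fake_pmd = FAKE_PMD_VALID;

/*
 * Model of pmdp_invalidate() after commit d52605d7cb30: atomically
 * clear the valid bit and return the *old* entry, so a dirty bit set
 * concurrently by "hardware" cannot be missed.
 */
static unsigned long fake_pmdp_invalidate(void)
{
	return atomic_fetch_and(&fake_pmd, ~FAKE_PMD_VALID);
}

int main(void)
{
	/* Racy sequence (pre-fix): plain read first, invalidate later. */
	unsigned long pmdval = atomic_load(&fake_pmd);
	/* A write to the THP lands in the window: dirty bit gets set. */
	atomic_fetch_or(&fake_pmd, FAKE_PMD_DIRTY);
	fake_pmdp_invalidate();
	printf("racy:  dirty seen = %d\n", !!(pmdval & FAKE_PMD_DIRTY));

	/* Fixed sequence: take the PMD from the invalidation itself. */
	atomic_store(&fake_pmd, FAKE_PMD_VALID);
	atomic_fetch_or(&fake_pmd, FAKE_PMD_DIRTY);	/* same write */
	pmdval = fake_pmdp_invalidate();
	printf("fixed: dirty seen = %d\n", !!(pmdval & FAKE_PMD_DIRTY));
	return 0;
}

The racy sequence prints 0 (the dirty bit set in the window is never
seen, so set_page_dirty() would be skipped), the fixed one prints 1:
because the read and the clear happen in one atomic operation, there is
no window in which a concurrent write can set a bit the reader then
forgets.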