From patchwork Sat Sep 19 04:20:24 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11786645
Date: Fri, 18 Sep 2020 21:20:24 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, apopple@nvidia.com, bharata@linux.ibm.com,
 bskeggs@redhat.com, hch@lst.de, jgg@nvidia.com, jglisse@redhat.com,
 jhubbard@nvidia.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 rcampbell@nvidia.com, shuah@kernel.org, shy828301@gmail.com,
 stable@vger.kernel.org, torvalds@linux-foundation.org, ziy@nvidia.com
Subject: [patch 09/15] mm/thp: fix __split_huge_pmd_locked() for migration PMD
Message-ID: <20200919042024.huPVYFOJa%akpm@linux-foundation.org>
In-Reply-To: <20200918211925.7e97f0ef63d92f5cfe5ccbc5@linux-foundation.org>

From: Ralph Campbell
Subject: mm/thp: fix __split_huge_pmd_locked() for migration PMD

A migrating transparent huge page has to already be unmapped.  Otherwise,
the page could be modified while it is being copied to a new page and
data could be lost.  The function __split_huge_pmd() checks for a PMD
migration entry before calling __split_huge_pmd_locked(), leading one to
think that __split_huge_pmd_locked() can handle splitting a migrating
PMD.

However, the code always increments page->_mapcount and adjusts the
memory control group accounting assuming the page is mapped.  Also, if
the PMD entry is a migration PMD entry, the call to is_huge_zero_pmd(*pmd)
is incorrect because it calls pmd_pfn(pmd) instead of
migration_entry_to_pfn(pmd_to_swp_entry(pmd)).

Fix these problems by checking for a PMD migration entry.

Link: https://lkml.kernel.org/r/20200903183140.19055-1-rcampbell@nvidia.com
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Ralph Campbell
Reviewed-by: Yang Shi
Reviewed-by: Zi Yan
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Alistair Popple
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Bharata B Rao
Cc: Ben Skeggs
Cc: Shuah Khan
Cc: <stable@vger.kernel.org> [4.14+]
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |   42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)
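To see why the is_huge_zero_pmd(*pmd) call goes wrong on a migration
entry, here is a small user-space sketch of the decoding problem.  It is
only a toy model: the bit layouts, the huge_zero_pfn value, and the helper
bodies below are invented for illustration and are not the kernel's real
encodings, and pmd_present() stands in for the pmd_trans_huge() guard the
patch adds.  The point is that a migration entry stores its pfn in
swap-entry format, so decoding it with the present-entry layout can
spuriously match the huge zero page:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t pmd_t;			/* toy: a PMD is just 64 bits */

	#define PRESENT_BIT	(1ULL << 0)	/* invented: set when the entry maps a page */
	#define PFN_SHIFT	12		/* invented: pfn position, present entries */
	#define SWP_PFN_SHIFT	8		/* invented: pfn position, migration entries */

	static const uint64_t huge_zero_pfn = 0x1234;	/* invented zero-page pfn */

	static int pmd_present(pmd_t pmd) { return pmd & PRESENT_BIT; }

	/* analogue of pmd_pfn(): decodes assuming a *present* entry */
	static uint64_t pmd_pfn(pmd_t pmd) { return pmd >> PFN_SHIFT; }

	/* the buggy check: trusts pmd_pfn() without knowing the entry is present */
	static int is_huge_zero_pmd(pmd_t pmd) { return pmd_pfn(pmd) == huge_zero_pfn; }

	int main(void)
	{
		/* a migration entry (present bit clear) for pfn 0x12340,
		 * stored in the swap-entry layout */
		pmd_t migration_pmd = (uint64_t)0x12340 << SWP_PFN_SHIFT;

		printf("present?                %d\n", pmd_present(migration_pmd));
		printf("buggy is_huge_zero_pmd: %d\n", is_huge_zero_pmd(migration_pmd));
		printf("guarded check:          %d\n",
		       pmd_present(migration_pmd) && is_huge_zero_pmd(migration_pmd));
		return 0;
	}

The "buggy" line prints 1, a false match: the swap-format bits, read with
the present-entry layout, happen to equal the zero-page pfn.  Guarding the
check, as pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd) does in the first
hunk below, prevents the misread.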
--- a/mm/huge_memory.c~mm-thp-fix-__split_huge_pmd_locked-for-migration-pmd
+++ a/mm/huge_memory.c
@@ -2022,7 +2022,7 @@ static void __split_huge_pmd_locked(stru
 		put_page(page);
 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
-	} else if (is_huge_zero_pmd(*pmd)) {
+	} else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
 		/*
 		 * FIXME: Do we want to invalidate secondary mmu by calling
 		 * mmu_notifier_invalidate_range() see comments below inside
@@ -2116,30 +2116,34 @@ static void __split_huge_pmd_locked(stru
 		pte = pte_offset_map(&_pmd, addr);
 		BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, addr, pte, entry);
-		atomic_inc(&page[i]._mapcount);
-		pte_unmap(pte);
-	}
-
-	/*
-	 * Set PG_double_map before dropping compound_mapcount to avoid
-	 * false-negative page_mapped().
-	 */
-	if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		if (!pmd_migration)
 			atomic_inc(&page[i]._mapcount);
+		pte_unmap(pte);
 	}
 
-	lock_page_memcg(page);
-	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
-		/* Last compound_mapcount is gone. */
-		__dec_lruvec_page_state(page, NR_ANON_THPS);
-		if (TestClearPageDoubleMap(page)) {
-			/* No need in mapcount reference anymore */
+	if (!pmd_migration) {
+		/*
+		 * Set PG_double_map before dropping compound_mapcount to avoid
+		 * false-negative page_mapped().
+		 */
+		if (compound_mapcount(page) > 1 &&
+		    !TestSetPageDoubleMap(page)) {
 			for (i = 0; i < HPAGE_PMD_NR; i++)
-				atomic_dec(&page[i]._mapcount);
+				atomic_inc(&page[i]._mapcount);
+		}
+
+		lock_page_memcg(page);
+		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+			/* Last compound_mapcount is gone. */
+			__dec_lruvec_page_state(page, NR_ANON_THPS);
+			if (TestClearPageDoubleMap(page)) {
+				/* No need in mapcount reference anymore */
+				for (i = 0; i < HPAGE_PMD_NR; i++)
+					atomic_dec(&page[i]._mapcount);
+			}
 		}
+		unlock_page_memcg(page);
 	}
-	unlock_page_memcg(page);
 
 	smp_wmb(); /* make pte visible before pmd */
 	pmd_populate(mm, pmd, pgtable);
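For context, the interleaving the fix has to tolerate can be lined up
from user space.  The sketch below is hypothetical and not part of this
series: it is not a reliable reproducer (the window is narrow and
timing-dependent), it assumes a kernel with THP enabled and at least two
NUMA nodes (target node 1 is an assumption), and it merely pairs the two
operations involved.  move_pages(2) installs a PMD migration entry while
the THP is copied to the other node, and an mprotect() over half the
range forces __split_huge_pmd() on that same PMD:

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#define THP_SIZE (2UL << 20)	/* assume 2 MiB PMD-sized pages */

	static char *region;

	static void *migrate_thread(void *arg)
	{
		void *pages[1] = { region };
		int nodes[1] = { 1 };	/* assumption: node 1 exists */
		int status[1] = { 0 };

		/* raw move_pages(2): unmaps the THP into a migration entry
		 * while it is copied to the target node */
		syscall(SYS_move_pages, 0, 1UL, pages, nodes, status, 0);
		return NULL;
	}

	int main(void)
	{
		/* over-allocate so we can align to a PMD boundary */
		char *raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (raw == MAP_FAILED)
			return 1;
		region = (char *)(((uintptr_t)raw + THP_SIZE - 1) &
				  ~(THP_SIZE - 1));
		madvise(region, THP_SIZE, MADV_HUGEPAGE);
		memset(region, 0x5a, THP_SIZE);	/* fault in, ideally as a THP */

		pthread_t t;
		pthread_create(&t, NULL, migrate_thread, NULL);

		/* changing protection on half the range forces a PMD split */
		mprotect(region, THP_SIZE / 2, PROT_READ);

		pthread_join(t, NULL);
		puts("done; hitting the window is timing-dependent");
		return 0;
	}

On a kernel without the fix, a split that catches the PMD in its
migration state adjusts _mapcount and the memory control group accounting
as if the page were still mapped, as described above; with the fix,
__split_huge_pmd_locked() recognizes the migration entry and skips that
accounting.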