From patchwork Thu Jul 8 01:10:15 2021
X-Patchwork-Submitter: Andrew Morton <akpm@linux-foundation.org>
X-Patchwork-Id: 12364461
Date: Wed, 07 Jul 2021 18:10:15 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com,
 christophe.leroy@csgroup.eu, hughd@google.com, joel@joelfernandes.org,
 kaleshsingh@google.com, kirill.shutemov@linux.intel.com,
 kirill@shutemov.name, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 mpe@ellerman.id.au, npiggin@gmail.com, sfr@canb.auug.org.au,
 stable@vger.kernel.org, torvalds@linux-foundation.org
Subject: [patch 51/54] mm/mremap: hold the rmap lock in write mode when
 moving page table entries.
Message-ID: <20210708011015.CbAeaXmtO%akpm@linux-foundation.org>
In-Reply-To: <20210707175950.eceddb86c6c555555d4730e2@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: mm/mremap: hold the rmap lock in write mode when moving page table
 entries.

To avoid a race between the rmap walk and mremap, mremap calls
take_rmap_locks().  The lock is taken to ensure that the rmap walk doesn't
miss a page table entry due to PTE moves via move_page_tables().  The
kernel further optimizes this locking: if the rmap walk is going to find
the newly added vma after the old vma, the rmap lock is not taken.  This is
safe because the rmap walk visits the vmas in the same order, so a page
table that is no longer attached to the older vma will be found attached to
the new vma, which is iterated later.

As explained in commit eb66ae030829 ("mremap: properly flush TLB before
releasing the page"), mremap is special in that it doesn't take ownership
of the page.  The optimized version for PUD/PMD aligned mremap also doesn't
hold the ptl lock.  This can result in stale TLB entries as shown below.

This patch updates the rmap locking requirement in mremap to handle the
race condition explained below with optimized mremap::

	Optimized PMD move

	CPU 1				CPU 2				CPU 3

	mremap(old_addr, new_addr)	page_shrinker/try_to_unmap_one

	mmap_write_lock_killable()

					addr = old_addr
					lock(pte_ptl)
	lock(pmd_ptl)
	pmd = *old_pmd
	pmd_clear(old_pmd)
	flush_tlb_range(old_addr)

	*new_pmd = pmd
							*new_addr = 10; and fills
							TLB with new addr and
							old pfn

	unlock(pmd_ptl)
					ptep_clear_flush()
					old pfn is free.
							Stale TLB entry

The optimized PUD move suffers from a similar race.  Both of the above race
conditions can be fixed by forcing the mremap path to take the rmap lock.

Link: https://lkml.kernel.org/r/20210616045239.370802-7-aneesh.kumar@linux.ibm.com
Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions")
Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/mremap.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/mremap.c~mm-mremap-hold-the-rmap-lock-in-write-mode-when-moving-page-table-entries
+++ a/mm/mremap.c
@@ -504,7 +504,7 @@ unsigned long move_page_tables(struct vm
 		} else if (IS_ENABLED(CONFIG_HAVE_MOVE_PUD) && extent == PUD_SIZE) {

 			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
-					   old_pud, new_pud, need_rmap_locks))
+					   old_pud, new_pud, true))
 				continue;
 		}

@@ -531,7 +531,7 @@ unsigned long move_page_tables(struct vm
 		 * moving at the PMD level if possible.
 		 */
 		if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
-				   old_pmd, new_pmd, need_rmap_locks))
+				   old_pmd, new_pmd, true))
 			continue;
 	}
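
For illustration, the rule this patch enforces at the two call sites above
can be condensed into a small standalone C sketch.  This is illustrative
code, not kernel source: must_take_rmap_locks() and the scaffolding around
it are hypothetical names invented for this example; only the decision it
encodes comes from the patch (PMD/PUD-level entry moves now always take
the rmap lock, while the PTE-level copy keeps the ordering-based
optimization)::

	#include <stdbool.h>
	#include <stdio.h>

	enum pgt_level { NORMAL_PTE, NORMAL_PMD, NORMAL_PUD };

	static bool must_take_rmap_locks(enum pgt_level level,
					 bool need_rmap_locks)
	{
		/*
		 * The optimized PMD/PUD move clears and reinstalls the
		 * whole entry without holding the pte_ptl, so the
		 * vma-ordering argument alone cannot close the stale-TLB
		 * window shown in the diagram above: force the lock.
		 */
		if (level == NORMAL_PMD || level == NORMAL_PUD)
			return true;
		/* PTE-level copies run under the pte_ptl, so the
		   ordering-based optimization remains safe there. */
		return need_rmap_locks;
	}

	int main(void)
	{
		/* PMD move: the lock is now taken even when the
		   caller's need_rmap_locks is false.  Prints 1. */
		printf("PMD move: %d\n",
		       must_take_rmap_locks(NORMAL_PMD, false));
		/* PTE copy: optimization preserved.  Prints 0. */
		printf("PTE copy: %d\n",
		       must_take_rmap_locks(NORMAL_PTE, false));
		return 0;
	}

Passing true unconditionally at the PUD/PMD call sites, rather than
plumbing a new flag through move_page_tables(), appears to be the minimal
change, which also keeps the fix easy to backport to the stable kernels
carrying the two commits named in the Fixes tags.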