From patchwork Mon Jan 27 19:53:21 2025
X-Patchwork-Submitter: Roman Gushchin <roman.gushchin@linux.dev>
X-Patchwork-Id: 13951689
From: Roman Gushchin <roman.gushchin@linux.dev>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Roman Gushchin, Jann Horn,
    Peter Zijlstra, Will Deacon, "Aneesh Kumar K.V", Nick Piggin,
    Hugh Dickins, linux-arch@vger.kernel.org
Subject: [PATCH v4] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP vmas into free_pgtables()
Date: Mon, 27 Jan 2025 19:53:21 +0000
Message-ID: <20250127195321.35779-1-roman.gushchin@linux.dev>

Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas") added
a forced TLB flush to tlb_end_vma(), which is required to avoid a race
between munmap() and unmap_mapping_range(). However, it also added
overhead to paths where tlb_end_vma() is used but vmas are not removed,
e.g. madvise(MADV_DONTNEED).

Fix this by moving the TLB flush out of tlb_end_vma() into the new
tlb_free_vma(), called from free_pgtables(), somewhat similar to the
stable version of the original commit: see stable commit 895428ee124a
("mm: Force TLB flush for PFNMAP mappings before unlink_file_vma()").

Note that if tlb->fullmm is set, no flush is required, as the whole mm
is about to be destroyed.
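To make the two affected paths concrete, here is a minimal userspace
sketch (illustration only, not part of the patch; "/dev/mixedmap-dev"
is a hypothetical driver whose mmap() installs a VM_MIXEDMAP mapping,
e.g. via vmf_insert_mixed()):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 2 * 1024 * 1024;
	int fd = open("/dev/mixedmap-dev", O_RDWR);	/* hypothetical */
	if (fd < 0)
		return 1;

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/*
	 * Zaps PTEs but keeps the VMA: only tlb_end_vma() runs, which
	 * with this patch no longer force-flushes just because the VMA
	 * is VM_PFNMAP/VM_MIXEDMAP.
	 */
	madvise(p, len, MADV_DONTNEED);

	/*
	 * Unlinks the VMA and frees page tables: free_pgtables() now
	 * calls tlb_free_vma(), which does the forced flush that closes
	 * the race with unmap_mapping_range().
	 */
	munmap(p, len);
	close(fd);
	return 0;
}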
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Jann Horn
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Hugh Dickins
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
---
v4:
- naming/comments update (by Peter Z.)
- check vma->vm_flags in tlb_free_vma() (by Peter Z.)

v3:
- added initialization of vma_pfn in __tlb_reset_range() (by Hugh D.)

v2:
- moved vma_pfn flag handling into tlb.h (by Peter Z.)
- added comments (by Peter Z.)
- fixed the vma_pfn flag setting (by Hugh D.)
---
 include/asm-generic/tlb.h | 49 +++++++++++++++++++++++++++++++++++++++----------
 mm/memory.c               |  2 ++
 2 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index e402aef79c93..dd673ec59893 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -58,6 +58,11 @@
  *    Defaults to flushing at tlb_end_vma() to reset the range; helps when
  *    there's large holes between the VMAs.
  *
+ *  - tlb_free_vma()
+ *
+ *    tlb_free_vma() marks the start of unlinking the vma and freeing
+ *    page-tables.
+ *
  *  - tlb_remove_table()
  *
  *    tlb_remove_table() is the basic primitive to free page-table directories
@@ -400,7 +405,10 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
	 * Do not reset mmu_gather::vma_* fields here, we do not
	 * call into tlb_start_vma() again to set them if there is an
	 * intermediate flush.
+	 *
+	 * Except for vma_pfn, that only cares if there's pending TLBI.
	 */
+	tlb->vma_pfn = 0;
 }

 #ifdef CONFIG_MMU_GATHER_NO_RANGE
@@ -465,7 +473,12 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
	 */
	tlb->vma_huge = is_vm_hugetlb_page(vma);
	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
-	tlb->vma_pfn  = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
+
+	/*
+	 * Track if there's at least one VM_PFNMAP/VM_MIXEDMAP vma
+	 * in the tracked range, see tlb_free_vma().
+	 */
+	tlb->vma_pfn |= !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
 }

 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -564,23 +577,39 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 }

 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
+		return;
+
+	/*
+	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
+	 * the ranges growing with the unused space between consecutive VMAs,
+	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
+	 * this.
+	 */
+	tlb_flush_mmu_tlbonly(tlb);
+}
+
+static inline void tlb_free_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
	if (tlb->fullmm)
		return;

	/*
	 * VM_PFNMAP is more fragile because the core mm will not track the
-	 * page mapcount -- there might not be page-frames for these PFNs after
-	 * all. Force flush TLBs for such ranges to avoid munmap() vs
-	 * unmap_mapping_range() races.
+	 * page mapcount -- there might not be page-frames for these PFNs
+	 * after all.
+	 *
+	 * Specifically, there is a race between munmap() and
+	 * unmap_mapping_range(), where munmap() will unlink the VMA, such
+	 * that unmap_mapping_range() will no longer observe the VMA and
+	 * no-op, without observing the TLBI, returning prematurely.
+	 *
+	 * So if we're about to unlink such a VMA, and we have pending
+	 * TLBI for such a vma, flush things now.
	 */
-	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
-		/*
-		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
-		 * the ranges growing with the unused space between consecutive VMAs.
-		 */
+	if ((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) && tlb->vma_pfn)
		tlb_flush_mmu_tlbonly(tlb);
-	}
 }

 /*
diff --git a/mm/memory.c b/mm/memory.c
index 539c0f7c6d54..4ea5e286c68f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -378,6 +378,7 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
		if (unlikely(xa_is_zero(next)))
			next = NULL;

+		tlb_free_vma(tlb, vma);
		/*
		 * Hide vma from rmap and truncate_pagecache before freeing
		 * pgtables
@@ -403,6 +404,7 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
			next = mas_find(mas, ceiling - 1);
			if (unlikely(xa_is_zero(next)))
				next = NULL;
+			tlb_free_vma(tlb, vma);
			if (mm_wr_locked)
				vma_start_write(vma);
			unlink_anon_vmas(vma);
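For reference, the resulting ordering on the munmap() side, assuming the
common unmap_region() path, looks roughly like this (simplified sketch,
not actual kernel code):

/*
 *   unmap_region()
 *     tlb_gather_mmu(&tlb, mm);
 *     unmap_vmas(&tlb, ...);      // zap PTEs; tlb_start_vma()/tlb_end_vma()
 *     free_pgtables(&tlb, ...);
 *       tlb_free_vma(&tlb, vma);  // new: flush pending TLBI for PFNMAP vmas
 *       unlink_file_vma(vma);     // VMA stops being visible to rmap here
 *       free_pgd_range(...);      // free the page tables themselves
 *     tlb_finish_mmu(&tlb);
 *
 * The point of ordering tlb_free_vma() before unlink_file_vma(): once
 * the VMA is unlinked, a concurrent unmap_mapping_range() can miss it
 * and return, so any stale TLB entries must be gone by then.
 */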