From patchwork Fri Jul 8 07:18:06 2022
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 12910665
Message-ID: <20220708071834.149930530@infradead.org>
User-Agent: quilt/0.66
Date: Fri, 08 Jul 2022 09:18:06 +0200
From: Peter Zijlstra
To: Jann Horn, Linus Torvalds, Will Deacon
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, peterz@infradead.org,
 Dave Airlie, Daniel Vetter, Andrew Morton, Guo Ren, David Miller
Subject: [PATCH 4/4] mmu_gather: Force tlb-flush VM_PFNMAP vmas
References: <20220708071802.751003711@infradead.org>
Jann reported a race between munmap() and unmap_mapping_range(), where
unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
VMA; however munmap() will not yet have invalidated the TLBs.
Therefore unmap_mapping_range() will complete while there are still
(stale) TLB entries for the specified range.

Mitigate this by force flushing TLBs for VM_PFNMAP ranges.
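For context, a minimal driver-side sketch of how such VM_PFNMAP mappings
typically come about and where the racing unmap_mapping_range() call is
made. This is illustrative only and not part of the patch; the mydev_*
names and layout are hypothetical.

/*
 * Illustrative only, not part of this patch: how a driver usually creates
 * the VM_PFNMAP mappings this change protects, and its revoke path that
 * races with munmap(). The mydev_* names are hypothetical.
 */
#include <linux/fs.h>
#include <linux/mm.h>

struct mydev {
	struct address_space *mapping;	/* filp->f_mapping of the char dev */
	unsigned long base_pfn;		/* first PFN of the device memory  */
};

static int mydev_mmap(struct file *filp, struct vm_area_struct *vma)
{
	struct mydev *dev = filp->private_data;
	unsigned long len = vma->vm_end - vma->vm_start;

	/*
	 * remap_pfn_range() marks the vma VM_PFNMAP|VM_IO: there are no
	 * struct pages and no mapcount tracking behind these PTEs, which
	 * is why the TLB flush below cannot be skipped for them.
	 */
	return remap_pfn_range(vma, vma->vm_start, dev->base_pfn, len,
			       vma->vm_page_prot);
}

static void mydev_revoke_mappings(struct mydev *dev)
{
	/*
	 * The other half of the race: if a concurrent munmap() has already
	 * unlinked the vma but not yet flushed the TLB, this call finds
	 * nothing to unmap and returns while stale TLB entries still exist,
	 * unless tlb_end_vma() force-flushes the VM_PFNMAP range.
	 */
	unmap_mapping_range(dev->mapping, 0, 0, 1);
}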
Reported-by: Jann Horn
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 include/asm-generic/tlb.h |   33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -303,6 +303,7 @@ struct mmu_gather {
 	 */
 	unsigned int		vma_exec : 1;
 	unsigned int		vma_huge : 1;
+	unsigned int		vma_pfn  : 1;
 
 	unsigned int		batch_count;
 
@@ -373,7 +374,6 @@ tlb_update_vma_flags(struct mmu_gather *
 #else /* CONFIG_MMU_GATHER_NO_RANGE */
 
 #ifndef tlb_flush
-
 /*
  * When an architecture does not provide its own tlb_flush() implementation
  * but does have a reasonably efficient flush_vma_range() implementation
@@ -393,6 +393,9 @@ static inline void tlb_flush(struct mmu_
 		flush_tlb_range(&vma, tlb->start, tlb->end);
 	}
 }
+#endif
+
+#endif /* CONFIG_MMU_GATHER_NO_RANGE */
 
 static inline void
 tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
@@ -410,17 +413,9 @@ tlb_update_vma_flags(struct mmu_gather *
 	 */
 	tlb->vma_huge = is_vm_hugetlb_page(vma);
 	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
+	tlb->vma_pfn  = !!(vma->vm_flags & VM_PFNMAP);
 }
 
-#else
-
-static inline void
-tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
-#endif
-
-#endif /* CONFIG_MMU_GATHER_NO_RANGE */
-
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
 	/*
@@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
 
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
+	if (tlb->fullmm)
 		return;
 
 	/*
-	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
-	 * the ranges growing with the unused space between consecutive VMAs,
-	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
-	 * this.
+	 * VM_PFNMAP is more fragile because the core mm will not track the
+	 * page mapcount -- there might not be page-frames for these PFNs after
+	 * all. Force flush TLBs for such ranges to avoid munmap() vs
+	 * unmap_mapping_range() races.
 	 */
-	tlb_flush_mmu_tlbonly(tlb);
+	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
+		/*
+		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
+		 * the ranges growing with the unused space between consecutive VMAs.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 /*
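For completeness, a hedged userspace sketch of how a process ends up with
a VM_PFNMAP vma whose teardown goes through the tlb_end_vma() path above.
Mapping a PCI BAR through sysfs is one common way to get such a vma; the
device path below is an example and will differ per machine, and the race
window itself is kernel-internal and cannot be observed directly from
userspace.

/*
 * Hedged userspace sketch (not part of the patch): obtaining a VM_PFNMAP
 * mapping and tearing it down with munmap(). The sysfs path is an example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Example path: BAR0 of a PCI device; adjust for a real system. */
	int fd = open("/sys/bus/pci/devices/0000:00:02.0/resource0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* The driver's mmap handler uses (io_)remap_pfn_range(), so this
	 * vma is VM_PFNMAP: no struct pages, no mapcount tracking. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* ... access the mapping so the TLB caches translations ... */

	/*
	 * munmap() unlinks the vma before invalidating the TLB; with this
	 * patch, tlb_end_vma() force-flushes because the vma was VM_PFNMAP,
	 * closing the window against a concurrent unmap_mapping_range().
	 */
	munmap(p, 4096);
	close(fd);
	return 0;
}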