From patchwork Fri Oct  6 03:59:08 2023
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13410944
From: riel@surriel.com
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    muchun.song@linux.dev, mike.kravetz@oracle.com, leit@meta.com,
    willy@infradead.org, Rik van Riel, stable@kernel.org
Subject: [PATCH 3/4] hugetlbfs: close race between MADV_DONTNEED and page fault
Date: Thu,  5 Oct 2023 23:59:08 -0400
Message-ID: <20231006040020.3677377-4-riel@surriel.com>
In-Reply-To: <20231006040020.3677377-1-riel@surriel.com>
References: <20231006040020.3677377-1-riel@surriel.com>
MIME-Version: 1.0
From: Rik van Riel

Malloc libraries, like jemalloc and tcmalloc, make decisions on when
to call madvise independently from the code in the main application.

This sometimes results in the application page faulting on an address,
right after the malloc library has shot down the backing memory with
MADV_DONTNEED.

Usually this is harmless, because we always have some 4kB pages
sitting around to satisfy a page fault.
However, with hugetlbfs, systems often allocate only the exact number
of huge pages that the application wants.

Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of
any lock taken on the page fault path, which can open up the following
race condition:

       CPU 1                            CPU 2

       MADV_DONTNEED
       unmap page
       shoot down TLB entry
                                        page fault
                                        fail to allocate a huge page
                                        killed with SIGBUS
       free page

Fix that race by pulling the locking from __unmap_hugepage_range_final
into helper functions called from zap_page_range_single. This ensures
page faults stay locked out of the MADV_DONTNEED VMA until the huge
pages have actually been freed.

Signed-off-by: Rik van Riel
Cc: stable@kernel.org
Fixes: 04ada095dcfc ("hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing")
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h | 35 +++++++++++++++++++++++++++++++++--
 mm/hugetlb.c            | 34 ++++++++++++++++++++++------------
 mm/memory.c             | 13 ++++++++-----
 3 files changed, 63 insertions(+), 19 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 694928fa06a3..d9ec500cfef9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -139,7 +139,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 void unmap_hugepage_range(struct vm_area_struct *, unsigned long,
 			  unsigned long, struct page *, zap_flags_t);
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+void __unmap_hugepage_range(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
 			  struct page *ref_page, zap_flags_t zap_flags);
@@ -246,6 +246,25 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 				unsigned long *start, unsigned long *end);
+extern void __hugetlb_zap_begin(struct vm_area_struct *vma,
+				unsigned long *begin, unsigned long *end);
+extern void __hugetlb_zap_end(struct vm_area_struct *vma,
+			      struct zap_details *details);
+
+static inline void hugetlb_zap_begin(struct vm_area_struct *vma,
+				     unsigned long *start, unsigned long *end)
+{
+	if (is_vm_hugetlb_page(vma))
+		__hugetlb_zap_begin(vma, start, end);
+}
+
+static inline void hugetlb_zap_end(struct vm_area_struct *vma,
+				   struct zap_details *details)
+{
+	if (is_vm_hugetlb_page(vma))
+		__hugetlb_zap_end(vma, details);
+}
+
 void hugetlb_vma_lock_read(struct vm_area_struct *vma);
 void hugetlb_vma_unlock_read(struct vm_area_struct *vma);
 void hugetlb_vma_lock_write(struct vm_area_struct *vma);
@@ -297,6 +316,18 @@ static inline void adjust_range_if_pmd_sharing_possible(
 {
 }
 
+static inline void hugetlb_zap_begin(
+				struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+}
+
+static inline void hugetlb_zap_end(
+				struct vm_area_struct *vma,
+				struct zap_details *details)
+{
+}
+
 static inline struct page *hugetlb_follow_page_mask(
 		struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags, unsigned int *page_mask)
@@ -442,7 +473,7 @@ static inline long hugetlb_change_protection(
 	return 0;
 }
 
-static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
 			struct vm_area_struct *vma, unsigned long start,
 			unsigned long end, struct page *ref_page,
 			zap_flags_t zap_flags)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd3de6ec8f1a..552c2e3221bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5305,9 +5305,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	return len + old_addr - old_end;
 }
 
-static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end,
-				   struct page *ref_page, zap_flags_t zap_flags)
+void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+			    unsigned long start, unsigned long end,
+			    struct page *ref_page, zap_flags_t zap_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -5434,16 +5434,25 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
-			  struct vm_area_struct *vma, unsigned long start,
-			  unsigned long end, struct page *ref_page,
-			  zap_flags_t zap_flags)
+void __hugetlb_zap_begin(struct vm_area_struct *vma,
+			 unsigned long *start, unsigned long *end)
 {
+	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
+		return;
+
+	adjust_range_if_pmd_sharing_possible(vma, start, end);
 	hugetlb_vma_lock_write(vma);
-	i_mmap_lock_write(vma->vm_file->f_mapping);
+	if (vma->vm_file)
+		i_mmap_lock_write(vma->vm_file->f_mapping);
+}
 
-	/* mmu notification performed in caller */
-	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
+void __hugetlb_zap_end(struct vm_area_struct *vma,
+		       struct zap_details *details)
+{
+	zap_flags_t zap_flags = details ? details->zap_flags : 0;
+
+	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
+		return;
 
 	if (zap_flags & ZAP_FLAG_UNMAP) {	/* final unmap */
 		/*
@@ -5456,11 +5465,12 @@ void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 		 * someone else.
 		 */
 		__hugetlb_vma_unlock_write_free(vma);
-		i_mmap_unlock_write(vma->vm_file->f_mapping);
 	} else {
-		i_mmap_unlock_write(vma->vm_file->f_mapping);
 		hugetlb_vma_unlock_write(vma);
 	}
+
+	if (vma->vm_file)
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
 }
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
diff --git a/mm/memory.c b/mm/memory.c
index 6c264d2f969c..517221f01303 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1683,7 +1683,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 		if (vma->vm_file) {
 			zap_flags_t zap_flags = details ?
 						details->zap_flags : 0;
-			__unmap_hugepage_range_final(tlb, vma, start, end,
+			__unmap_hugepage_range(tlb, vma, start, end,
 						     NULL, zap_flags);
 		}
 	} else
@@ -1728,8 +1728,12 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
 				start_addr, end_addr);
 	mmu_notifier_invalidate_range_start(&range);
 	do {
-		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
+		unsigned long start = start_addr;
+		unsigned long end = end_addr;
+		hugetlb_zap_begin(vma, &start, &end);
+		unmap_single_vma(tlb, vma, start, end, &details,
 				 mm_wr_locked);
+		hugetlb_zap_end(vma, &details);
 	} while ((vma = mas_find(mas, tree_end - 1)) != NULL);
 	mmu_notifier_invalidate_range_end(&range);
 }
@@ -1753,9 +1757,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	lru_add_drain();
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, end);
-	if (is_vm_hugetlb_page(vma))
-		adjust_range_if_pmd_sharing_possible(vma, &range.start,
-						     &range.end);
+	hugetlb_zap_begin(vma, &range.start, &range.end);
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
 	mmu_notifier_invalidate_range_start(&range);
@@ -1766,6 +1768,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	unmap_single_vma(&tlb, vma, address, end, details, false);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
+	hugetlb_zap_end(vma, details);
 }
 
 /**