From patchwork Fri Oct 6 03:59:05 2023
From: riel@surriel.com
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, muchun.song@linux.dev, mike.kravetz@oracle.com, leit@meta.com, willy@infradead.org
Subject: [PATCH v7 0/4] hugetlbfs: close race between MADV_DONTNEED and page fault
Date: Thu, 5 Oct 2023 23:59:05 -0400
Message-ID: <20231006040020.3677377-1-riel@surriel.com>
v7: fix !vma->vm_file and hfill2 cases
v6: move a fix from patch 3 to patch 2, more locking fixes
v5: somehow a __vma_private_lock(vma) test failed to make it from my tree into the v4 series, fix that
v4: fix unmap_vmas locking issue pointed out by Mike Kravetz, and resulting lockdep fallout
v3: fix compile error w/ lockdep and test case errors with patch 3
v2: fix the locking bug found with the libhugetlbfs tests

Malloc libraries, like jemalloc and tcmalloc, make decisions on when to call madvise independently from the code in the main application.
This sometimes results in the application page faulting on an address right after the malloc library has shot down the backing memory with MADV_DONTNEED.

Usually this is harmless, because we always have some 4kB pages sitting around to satisfy a page fault. However, with hugetlbfs, systems often allocate only the exact number of huge pages that the application wants.

Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside of any lock taken on the page fault path, which can open up the following race condition:

       CPU 1                            CPU 2

       MADV_DONTNEED
       unmap page
       shoot down TLB entry
                                        page fault
                                        fail to allocate a huge page
                                        killed with SIGBUS
       free page

Fix that race by extending the hugetlb_vma_lock locking scheme to also cover private hugetlb mappings (with resv_map), and by pulling the locking from __unmap_hugepage_final_range into helper functions called from zap_page_range_single. This ensures page faults stay locked out of the MADV_DONTNEED VMA until the huge pages have actually been freed.

The third patch in the series is more of an RFC. Using the invalidate_lock instead of the hugetlb_vma_lock greatly simplifies the code, but at the cost of turning a per-VMA lock into a lock per backing hugetlbfs file, which could slow things down when multiple processes are mapping the same hugetlbfs file.