From patchwork Fri Oct 6 03:59:06 2023
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13410941
From: riel@surriel.com
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    muchun.song@linux.dev, mike.kravetz@oracle.com, leit@meta.com,
    willy@infradead.org, Rik van Riel, stable@kernel.org
Subject: [PATCH 1/4] hugetlbfs: clear resv_map pointer if mmap fails
Date: Thu, 5 Oct 2023 23:59:06 -0400
Message-ID: <20231006040020.3677377-2-riel@surriel.com>
In-Reply-To: <20231006040020.3677377-1-riel@surriel.com>
References: <20231006040020.3677377-1-riel@surriel.com>

From: Rik van Riel

Hugetlbfs leaves a dangling pointer in the VMA if mmap fails. This has
not been a problem so far, but other code in this patch series tries
to follow that pointer.
Signed-off-by: Mike Kravetz
Signed-off-by: Rik van Riel
Cc: stable@kernel.org
Fixes: 04ada095dcfc ("hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing")
---
 mm/hugetlb.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba6d39b71cb1..a86e070d735b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1138,8 +1138,7 @@ static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
 
-	set_vma_private_data(vma, (get_vma_private_data(vma) &
-				HPAGE_RESV_MASK) | (unsigned long)map);
+	set_vma_private_data(vma, (unsigned long)map);
 }
 
 static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
@@ -6806,8 +6805,10 @@ bool hugetlb_reserve_pages(struct inode *inode,
 	 */
 	if (chg >= 0 && add < 0)
 		region_abort(resv_map, from, to, regions_needed);
-	if (vma && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
+	if (vma && is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
 		kref_put(&resv_map->refs, resv_map_release);
+		set_vma_resv_map(vma, NULL);
+	}
 
 	return false;
 }

From patchwork Fri Oct 6 03:59:07 2023
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13410942
From: riel@surriel.com
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    muchun.song@linux.dev, mike.kravetz@oracle.com, leit@meta.com,
    willy@infradead.org, Rik van Riel, stable@kernel.org
Subject: [PATCH 2/4] hugetlbfs: extend hugetlb_vma_lock to private VMAs
Date: Thu, 5 Oct 2023 23:59:07 -0400
Message-ID: <20231006040020.3677377-3-riel@surriel.com>
In-Reply-To: <20231006040020.3677377-1-riel@surriel.com>
References: <20231006040020.3677377-1-riel@surriel.com>

From: Rik van Riel

Extend the locking scheme used to protect shared hugetlb mappings
from truncate vs. page fault races, in order to protect private
hugetlb mappings (with resv_map) against MADV_DONTNEED.

Add a read-write semaphore to the resv_map data structure, and use
that from the hugetlb_vma_(un)lock_* functions, in preparation for
closing the race between MADV_DONTNEED and page faults.

Signed-off-by: Rik van Riel
Reviewed-by: Mike Kravetz
Cc: stable@kernel.org
Fixes: 04ada095dcfc ("hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing")
---
 include/linux/hugetlb.h |  6 ++++++
 mm/hugetlb.c            | 41 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5b2626063f4f..694928fa06a3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -60,6 +60,7 @@ struct resv_map {
 	long adds_in_progress;
 	struct list_head region_cache;
 	long region_cache_count;
+	struct rw_semaphore rw_sema;
 #ifdef CONFIG_CGROUP_HUGETLB
 	/*
 	 * On private mappings, the counter to uncharge reservations is stored
@@ -1231,6 +1232,11 @@ static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
 	return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
 }
 
+static inline bool __vma_private_lock(struct vm_area_struct *vma)
+{
+	return (!(vma->vm_flags & VM_MAYSHARE)) && vma->vm_private_data;
+}
+
 /*
  * Safe version of huge_pte_offset() to check the locks.  See comments
  * above huge_pte_offset().
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a86e070d735b..dd3de6ec8f1a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -97,6 +97,7 @@ static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
 static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
 static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
+static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
 
 static inline bool subpool_is_free(struct hugepage_subpool *spool)
 {
@@ -267,6 +268,10 @@ void hugetlb_vma_lock_read(struct vm_area_struct *vma)
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
 		down_read(&vma_lock->rw_sema);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		down_read(&resv_map->rw_sema);
 	}
 }
 
@@ -276,6 +281,10 @@ void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
 		up_read(&vma_lock->rw_sema);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		up_read(&resv_map->rw_sema);
 	}
 }
 
@@ -285,6 +294,10 @@ void hugetlb_vma_lock_write(struct vm_area_struct *vma)
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
 		down_write(&vma_lock->rw_sema);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		down_write(&resv_map->rw_sema);
 	}
 }
 
@@ -294,17 +307,27 @@ void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
 		up_write(&vma_lock->rw_sema);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		up_write(&resv_map->rw_sema);
 	}
 }
 
 int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
 {
-	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-	if (!__vma_shareable_lock(vma))
-		return 1;
+	if (__vma_shareable_lock(vma)) {
+		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
-	return down_write_trylock(&vma_lock->rw_sema);
+		return down_write_trylock(&vma_lock->rw_sema);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		return down_write_trylock(&resv_map->rw_sema);
+	}
+
+	return 1;
 }
 
 void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
@@ -313,6 +336,10 @@ void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
 		lockdep_assert_held(&vma_lock->rw_sema);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		lockdep_assert_held(&resv_map->rw_sema);
 	}
 }
 
@@ -345,6 +372,11 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
 
 		__hugetlb_vma_unlock_write_put(vma_lock);
+	} else if (__vma_private_lock(vma)) {
+		struct resv_map *resv_map = vma_resv_map(vma);
+
+		/* no free for anon vmas, but still need to unlock */
+		up_write(&resv_map->rw_sema);
 	}
 }
 
@@ -1068,6 +1100,7 @@ struct resv_map *resv_map_alloc(void)
 	kref_init(&resv_map->refs);
 	spin_lock_init(&resv_map->lock);
 	INIT_LIST_HEAD(&resv_map->regions);
+	init_rwsem(&resv_map->rw_sema);
 
 	resv_map->adds_in_progress = 0;
 
 	/*

From patchwork Fri Oct 6 03:59:08 2023
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13410944
From: riel@surriel.com
To: linux-kernel@vger.kernel.org
Cc: kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org,
    muchun.song@linux.dev, mike.kravetz@oracle.com, leit@meta.com,
    willy@infradead.org, Rik van Riel, stable@kernel.org
Subject: [PATCH 3/4] hugetlbfs: close race between MADV_DONTNEED and page fault
Date: Thu, 5 Oct 2023 23:59:08 -0400
Message-ID: <20231006040020.3677377-4-riel@surriel.com>
In-Reply-To: <20231006040020.3677377-1-riel@surriel.com>
References: <20231006040020.3677377-1-riel@surriel.com>

From: Rik van Riel

Malloc libraries, like jemalloc and tcmalloc, decide when to call
madvise independently from the code in the main application. This
sometimes results in the application page faulting on an address
right after the malloc library has shot down the backing memory with
MADV_DONTNEED.

Usually this is harmless, because we always have some 4kB pages
sitting around to satisfy a page fault.
However, with hugetlbfs, systems often allocate only the exact number
of huge pages that the application wants.

Due to TLB batching, hugetlbfs MADV_DONTNEED will free pages outside
of any lock taken on the page fault path, which can open up the
following race condition:

       CPU 1                            CPU 2

       MADV_DONTNEED
       unmap page
       shoot down TLB entry
                                        page fault
                                        fail to allocate a huge page
                                        killed with SIGBUS
       free page

Fix that race by pulling the locking from __unmap_hugepage_final_range
into helper functions called from zap_page_range_single. This ensures
page faults stay locked out of the MADV_DONTNEED VMA until the huge
pages have actually been freed.

Signed-off-by: Rik van Riel
Cc: stable@kernel.org
Fixes: 04ada095dcfc ("hugetlb: don't delete vma_lock in hugetlb MADV_DONTNEED processing")
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h | 35 +++++++++++++++++++++++++++++++++--
 mm/hugetlb.c            | 34 ++++++++++++++++++++++------------
 mm/memory.c             | 13 ++++++++-----
 3 files changed, 63 insertions(+), 19 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 694928fa06a3..d9ec500cfef9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -139,7 +139,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 void unmap_hugepage_range(struct vm_area_struct *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+void __unmap_hugepage_range(struct mmu_gather *tlb,
 			  struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end,
 			  struct page *ref_page, zap_flags_t zap_flags);
@@ -246,6 +246,25 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 				unsigned long *start, unsigned long *end);
 
+extern void __hugetlb_zap_begin(struct vm_area_struct *vma,
+				unsigned long *begin, unsigned long *end);
+extern void __hugetlb_zap_end(struct vm_area_struct *vma,
+			      struct zap_details *details);
+
+static inline void hugetlb_zap_begin(struct vm_area_struct *vma,
+				     unsigned long *start, unsigned long *end)
+{
+	if (is_vm_hugetlb_page(vma))
+		__hugetlb_zap_begin(vma, start, end);
+}
+
+static inline void hugetlb_zap_end(struct vm_area_struct *vma,
+				   struct zap_details *details)
+{
+	if (is_vm_hugetlb_page(vma))
+		__hugetlb_zap_end(vma, details);
+}
+
 void hugetlb_vma_lock_read(struct vm_area_struct *vma);
 void hugetlb_vma_unlock_read(struct vm_area_struct *vma);
 void hugetlb_vma_lock_write(struct vm_area_struct *vma);
@@ -297,6 +316,18 @@ static inline void adjust_range_if_pmd_sharing_possible(
 {
 }
 
+static inline void hugetlb_zap_begin(
+				struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+}
+
+static inline void hugetlb_zap_end(
+				struct vm_area_struct *vma,
+				struct zap_details *details)
+{
+}
+
 static inline struct page *hugetlb_follow_page_mask(
 	struct vm_area_struct *vma, unsigned long address, unsigned int flags,
 	unsigned int *page_mask)
@@ -442,7 +473,7 @@ static inline long hugetlb_change_protection(
 	return 0;
 }
 
-static inline void __unmap_hugepage_range_final(struct mmu_gather *tlb,
+static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
 			struct vm_area_struct *vma, unsigned long start,
 			unsigned long end, struct page *ref_page,
 			zap_flags_t zap_flags)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dd3de6ec8f1a..552c2e3221bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5305,9 +5305,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	return len + old_addr - old_end;
 }
 
-static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end,
-				   struct page *ref_page, zap_flags_t zap_flags)
+void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+			    unsigned long start, unsigned long end,
+			    struct page *ref_page, zap_flags_t zap_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address;
@@ -5434,16 +5434,25 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __unmap_hugepage_range_final(struct mmu_gather *tlb,
-			  struct vm_area_struct *vma, unsigned long start,
-			  unsigned long end, struct page *ref_page,
-			  zap_flags_t zap_flags)
+void __hugetlb_zap_begin(struct vm_area_struct *vma,
+			 unsigned long *start, unsigned long *end)
 {
+	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
+		return;
+
+	adjust_range_if_pmd_sharing_possible(vma, start, end);
 	hugetlb_vma_lock_write(vma);
-	i_mmap_lock_write(vma->vm_file->f_mapping);
+	if (vma->vm_file)
+		i_mmap_lock_write(vma->vm_file->f_mapping);
+}
 
-	/* mmu notification performed in caller */
-	__unmap_hugepage_range(tlb, vma, start, end, ref_page, zap_flags);
+void __hugetlb_zap_end(struct vm_area_struct *vma,
+		       struct zap_details *details)
+{
+	zap_flags_t zap_flags = details ? details->zap_flags : 0;
+
+	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
+		return;
 
 	if (zap_flags & ZAP_FLAG_UNMAP) {	/* final unmap */
 		/*
@@ -5456,11 +5465,12 @@ void __unmap_hugepage_range_final(struct mmu_gather *tlb,
 		 * someone else.
 		 */
 		__hugetlb_vma_unlock_write_free(vma);
-		i_mmap_unlock_write(vma->vm_file->f_mapping);
 	} else {
-		i_mmap_unlock_write(vma->vm_file->f_mapping);
 		hugetlb_vma_unlock_write(vma);
 	}
+
+	if (vma->vm_file)
+		i_mmap_unlock_write(vma->vm_file->f_mapping);
 }
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
diff --git a/mm/memory.c b/mm/memory.c
index 6c264d2f969c..517221f01303 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1683,7 +1683,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 		if (vma->vm_file) {
 			zap_flags_t zap_flags = details ?
 				details->zap_flags : 0;
-			__unmap_hugepage_range_final(tlb, vma, start, end,
+			__unmap_hugepage_range(tlb, vma, start, end,
 						     NULL, zap_flags);
 		}
 	} else
@@ -1728,8 +1728,12 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
 				start_addr, end_addr);
 	mmu_notifier_invalidate_range_start(&range);
 	do {
-		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
+		unsigned long start = start_addr;
+		unsigned long end = end_addr;
+		hugetlb_zap_begin(vma, &start, &end);
+		unmap_single_vma(tlb, vma, start, end, &details,
 				 mm_wr_locked);
+		hugetlb_zap_end(vma, &details);
 	} while ((vma = mas_find(mas, tree_end - 1)) != NULL);
 	mmu_notifier_invalidate_range_end(&range);
 }
@@ -1753,9 +1757,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	lru_add_drain();
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, end);
-	if (is_vm_hugetlb_page(vma))
-		adjust_range_if_pmd_sharing_possible(vma, &range.start,
-						     &range.end);
+	hugetlb_zap_begin(vma, &range.start, &range.end);
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
 	mmu_notifier_invalidate_range_start(&range);
@@ -1766,6 +1768,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	unmap_single_vma(&tlb, vma, address, end, details, false);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
+	hugetlb_zap_end(vma, details);
 }
 
 /**

From patchwork Fri Oct 6 03:59:09 2023
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 13410943
kanga.kvack.org (Postfix, from userid 40) id 9EA4694000B; Fri, 6 Oct 2023 00:00:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7A001940015; Fri, 6 Oct 2023 00:00:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 67AE094000B for ; Fri, 6 Oct 2023 00:00:40 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 2F8C8C015F for ; Fri, 6 Oct 2023 04:00:40 +0000 (UTC) X-FDA: 81313685040.02.09B490A Received: from shelob.surriel.com (shelob.surriel.com [96.67.55.147]) by imf11.hostedemail.com (Postfix) with ESMTP id 6B2D340012 for ; Fri, 6 Oct 2023 04:00:38 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=none; spf=none (imf11.hostedemail.com: domain of riel@shelob.surriel.com has no SPF policy when checking 96.67.55.147) smtp.mailfrom=riel@shelob.surriel.com; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1696564838; h=from:from:sender:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=vXoXPXIGG/hYgLZnUs/SK/cb/Tf3dk1Bw29S1IJUAic=; b=W+JEj7BZVaBtMn0nYvCqT7AueJYlAtA5C/BPY+ri7Un6ZKAAaHQVLpZnoOXN7ZheqkOY11 sjJeWs+9nDI9qzjzr9w5Fa85VHSdOLNoquIqSs/GkhyAejz6+73jk2fczt9rGMFfdwZOLn y3b9HnxrYBv1HGUQkAjZnOUVPa0l1jE= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1696564838; a=rsa-sha256; cv=none; b=yDy169hSdUYefoDC7SLoHuQnqCS6+B0c5wtN+g0id7J1wxMsJ4+BCtu57ho6sWB2yUO9Jd 8JznyblP36b1E0OVWYhsgIr7ryoLIoQ7pdRyfI+MneYJS0EF7Eh4c3mi0Y6ap1ua03iXfb GP/fd/I+koN5v/wNqOGKcN3fuJRD+ns= ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=none; spf=none 
(imf11.hostedemail.com: domain of riel@shelob.surriel.com has no SPF policy when checking 96.67.55.147) smtp.mailfrom=riel@shelob.surriel.com; dmarc=none Received: from imladris.home.surriel.com ([10.0.13.28] helo=imladris.surriel.com) by shelob.surriel.com with esmtpsa (TLS1.2) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.96) (envelope-from ) id 1qoc0l-0000mf-0Q; Fri, 06 Oct 2023 00:00:23 -0400 From: riel@surriel.com To: linux-kernel@vger.kernel.org Cc: kernel-team@meta.com, linux-mm@kvack.org, akpm@linux-foundation.org, muchun.song@linux.dev, mike.kravetz@oracle.com, leit@meta.com, willy@infradead.org, Rik van Riel Subject: [PATCH 4/4] hugetlbfs: replace hugetlb_vma_lock with invalidate_lock Date: Thu, 5 Oct 2023 23:59:09 -0400 Message-ID: <20231006040020.3677377-5-riel@surriel.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231006040020.3677377-1-riel@surriel.com> References: <20231006040020.3677377-1-riel@surriel.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 6B2D340012 X-Rspam-User: X-Stat-Signature: ogrc1pj4e46wdrmhbsqh5x7izn96jfao X-Rspamd-Server: rspam03 X-HE-Tag: 1696564838-499863 X-HE-Meta: 
From: Rik van Riel

Replace the custom hugetlbfs VMA locking code with the recently
introduced invalidate_lock. This greatly simplifies things.

However, this is a large enough change that it should probably go in
separately from the other changes. Another question is whether this
simplification hurts scalability for certain workloads.
Suggested-by: Matthew Wilcox
Signed-off-by: Rik van Riel
---
 fs/hugetlbfs/inode.c    |  71 ++----------
 include/linux/fs.h      |   6 +
 include/linux/hugetlb.h |  21 +---
 mm/hugetlb.c            | 237 ++++------------------------------
 4 files changed, 35 insertions(+), 300 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316c4cebd3f3..18a66632d789 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -485,7 +485,6 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 				     struct folio *folio, pgoff_t index)
 {
 	struct rb_root_cached *root = &mapping->i_mmap;
-	struct hugetlb_vma_lock *vma_lock;
 	struct page *page = &folio->page;
 	struct vm_area_struct *vma;
 	unsigned long v_start;
@@ -495,9 +494,9 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 	start = index * pages_per_huge_page(h);
 	end = (index + 1) * pages_per_huge_page(h);
 
+	filemap_invalidate_lock(mapping);
 	i_mmap_lock_write(mapping);
-retry:
-	vma_lock = NULL;
+
 	vma_interval_tree_foreach(vma, root, start, end - 1) {
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
@@ -505,62 +504,12 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		if (!hugetlb_vma_maps_page(vma, v_start, page))
 			continue;
 
-		if (!hugetlb_vma_trylock_write(vma)) {
-			vma_lock = vma->vm_private_data;
-			/*
-			 * If we can not get vma lock, we need to drop
-			 * immap_sema and take locks in order. First,
-			 * take a ref on the vma_lock structure so that
-			 * we can be guaranteed it will not go away when
-			 * dropping immap_sema.
-			 */
-			kref_get(&vma_lock->refs);
-			break;
-		}
-
 		unmap_hugepage_range(vma, v_start, v_end, NULL,
 				     ZAP_FLAG_DROP_MARKER);
-		hugetlb_vma_unlock_write(vma);
 	}
 
+	filemap_invalidate_unlock(mapping);
 	i_mmap_unlock_write(mapping);
-
-	if (vma_lock) {
-		/*
-		 * Wait on vma_lock. We know it is still valid as we have
-		 * a reference. We must 'open code' vma locking as we do
-		 * not know if vma_lock is still attached to vma.
-		 */
-		down_write(&vma_lock->rw_sema);
-		i_mmap_lock_write(mapping);
-
-		vma = vma_lock->vma;
-		if (!vma) {
-			/*
-			 * If lock is no longer attached to vma, then just
-			 * unlock, drop our reference and retry looking for
-			 * other vmas.
-			 */
-			up_write(&vma_lock->rw_sema);
-			kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
-			goto retry;
-		}
-
-		/*
-		 * vma_lock is still attached to vma. Check to see if vma
-		 * still maps page and if so, unmap.
-		 */
-		v_start = vma_offset_start(vma, start);
-		v_end = vma_offset_end(vma, end);
-		if (hugetlb_vma_maps_page(vma, v_start, page))
-			unmap_hugepage_range(vma, v_start, v_end, NULL,
-					     ZAP_FLAG_DROP_MARKER);
-
-		kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
-		hugetlb_vma_unlock_write(vma);
-
-		goto retry;
-	}
 }
 
 static void
@@ -578,20 +527,10 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		unsigned long v_start;
 		unsigned long v_end;
 
-		if (!hugetlb_vma_trylock_write(vma))
-			continue;
-
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
 
 		unmap_hugepage_range(vma, v_start, v_end, NULL, zap_flags);
-
-		/*
-		 * Note that vma lock only exists for shared/non-private
-		 * vmas. Therefore, lock is not held when calling
-		 * unmap_hugepage_range for private vmas.
-		 */
-		hugetlb_vma_unlock_write(vma);
 	}
 }
 
@@ -725,10 +664,12 @@ static void hugetlb_vmtruncate(struct inode *inode, loff_t offset)
 	pgoff = offset >> PAGE_SHIFT;
 
 	i_size_write(inode, offset);
+	filemap_invalidate_lock(mapping);
 	i_mmap_lock_write(mapping);
 	if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))
 		hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0,
 				      ZAP_FLAG_DROP_MARKER);
+	filemap_invalidate_unlock(mapping);
 	i_mmap_unlock_write(mapping);
 	remove_inode_hugepages(inode, offset, LLONG_MAX);
 }
@@ -778,6 +719,7 @@ static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 		return -EPERM;
 	}
 
+	filemap_invalidate_lock(mapping);
 	i_mmap_lock_write(mapping);
 
 	/* If range starts before first full page, zero partial page. */
@@ -799,6 +741,7 @@ static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
 				hole_end, offset + len);
 
 	i_mmap_unlock_write(mapping);
+	filemap_invalidate_unlock(mapping);
 
 	/* Remove full pages from the file. */
 	if (hole_end > hole_start)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 4aeb3fa11927..b455a8913db4 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -847,6 +847,12 @@ static inline void filemap_invalidate_lock(struct address_space *mapping)
 	down_write(&mapping->invalidate_lock);
 }
 
+static inline int filemap_invalidate_trylock(
+					struct address_space *mapping)
+{
+	return down_write_trylock(&mapping->invalidate_lock);
+}
+
 static inline void filemap_invalidate_unlock(struct address_space *mapping)
 {
 	up_write(&mapping->invalidate_lock);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d9ec500cfef9..2908c47e7bf2 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -60,7 +60,6 @@ struct resv_map {
 	long adds_in_progress;
 	struct list_head region_cache;
 	long region_cache_count;
-	struct rw_semaphore rw_sema;
 #ifdef CONFIG_CGROUP_HUGETLB
 	/*
 	 * On private mappings, the counter to uncharge reservations is stored
@@ -107,12 +106,6 @@ struct file_region {
 #endif
 };
 
-struct hugetlb_vma_lock {
-	struct kref refs;
-	struct rw_semaphore rw_sema;
-	struct vm_area_struct *vma;
-};
-
 extern struct resv_map *resv_map_alloc(void);
 void resv_map_release(struct kref *ref);
 
@@ -1277,17 +1270,9 @@ hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
 {
 #if defined(CONFIG_HUGETLB_PAGE) && \
 	defined(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) && defined(CONFIG_LOCKDEP)
-	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-	/*
-	 * If pmd sharing possible, locking needed to safely walk the
-	 * hugetlb pgtables. More information can be found at the comment
-	 * above huge_pte_offset() in the same file.
-	 *
-	 * NOTE: lockdep_is_held() is only defined with CONFIG_LOCKDEP.
-	 */
-	if (__vma_shareable_lock(vma))
-		WARN_ON_ONCE(!lockdep_is_held(&vma_lock->rw_sema) &&
+	if (vma->vm_file)
+		WARN_ON_ONCE(!lockdep_is_held(
+				&vma->vm_file->f_mapping->invalidate_lock) &&
 			     !lockdep_is_held(
 				 &vma->vm_file->f_mapping->i_mmap_rwsem));
 #endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 552c2e3221bd..0dcaccc29e97 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -92,9 +92,6 @@ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
-static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
-static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
 static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
 static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
@@ -264,170 +261,41 @@ static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
  */
 void hugetlb_vma_lock_read(struct vm_area_struct *vma)
 {
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		down_read(&vma_lock->rw_sema);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		down_read(&resv_map->rw_sema);
-	}
+	if (vma->vm_file)
+		filemap_invalidate_lock_shared(vma->vm_file->f_mapping);
 }
 
 void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
 {
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		up_read(&vma_lock->rw_sema);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		up_read(&resv_map->rw_sema);
-	}
+	if (vma->vm_file)
+		filemap_invalidate_unlock_shared(vma->vm_file->f_mapping);
 }
 
 void hugetlb_vma_lock_write(struct vm_area_struct *vma)
 {
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		down_write(&vma_lock->rw_sema);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		down_write(&resv_map->rw_sema);
-	}
+	if (vma->vm_file)
+		filemap_invalidate_lock(vma->vm_file->f_mapping);
 }
 
 void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
 {
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		up_write(&vma_lock->rw_sema);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		up_write(&resv_map->rw_sema);
-	}
+	if (vma->vm_file)
+		filemap_invalidate_unlock(vma->vm_file->f_mapping);
 }
 
 int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
 {
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		return down_write_trylock(&vma_lock->rw_sema);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		return down_write_trylock(&resv_map->rw_sema);
-	}
+	if (vma->vm_file)
+		return filemap_invalidate_trylock(vma->vm_file->f_mapping);
 	return 1;
 }
 
 void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		lockdep_assert_held(&vma_lock->rw_sema);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		lockdep_assert_held(&resv_map->rw_sema);
-	}
-}
-
-void hugetlb_vma_lock_release(struct kref *kref)
-{
-	struct hugetlb_vma_lock *vma_lock = container_of(kref,
-			struct hugetlb_vma_lock, refs);
-
-	kfree(vma_lock);
-}
-
-static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
-{
-	struct vm_area_struct *vma = vma_lock->vma;
-
-	/*
-	 * vma_lock structure may or not be released as a result of put,
-	 * it certainly will no longer be attached to vma so clear pointer.
-	 * Semaphore synchronizes access to vma_lock->vma field.
-	 */
-	vma_lock->vma = NULL;
-	vma->vm_private_data = NULL;
-	up_write(&vma_lock->rw_sema);
-	kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
-}
-
-static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
-{
-	if (__vma_shareable_lock(vma)) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		__hugetlb_vma_unlock_write_put(vma_lock);
-	} else if (__vma_private_lock(vma)) {
-		struct resv_map *resv_map = vma_resv_map(vma);
-
-		/* no free for anon vmas, but still need to unlock */
-		up_write(&resv_map->rw_sema);
-	}
-}
-
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
-{
-	/*
-	 * Only present in sharable vmas.
-	 */
-	if (!vma || !__vma_shareable_lock(vma))
-		return;
-
-	if (vma->vm_private_data) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		down_write(&vma_lock->rw_sema);
-		__hugetlb_vma_unlock_write_put(vma_lock);
-	}
-}
-
-static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
-{
-	struct hugetlb_vma_lock *vma_lock;
-
-	/* Only establish in (flags) sharable vmas */
-	if (!vma || !(vma->vm_flags & VM_MAYSHARE))
-		return;
-
-	/* Should never get here with non-NULL vm_private_data */
-	if (vma->vm_private_data)
-		return;
-
-	vma_lock = kmalloc(sizeof(*vma_lock), GFP_KERNEL);
-	if (!vma_lock) {
-		/*
-		 * If we can not allocate structure, then vma can not
-		 * participate in pmd sharing. This is only a possible
-		 * performance enhancement and memory saving issue.
-		 * However, the lock is also used to synchronize page
-		 * faults with truncation. If the lock is not present,
-		 * unlikely races could leave pages in a file past i_size
-		 * until the file is removed. Warn in the unlikely case of
-		 * allocation failure.
-		 */
-		pr_warn_once("HugeTLB: unable to allocate vma specific lock\n");
-		return;
-	}
-
-	kref_init(&vma_lock->refs);
-	init_rwsem(&vma_lock->rw_sema);
-	vma_lock->vma = vma;
-	vma->vm_private_data = vma_lock;
+	if (vma->vm_file)
+		lockdep_assert_held(&vma->vm_file->f_mapping->invalidate_lock);
 }
 
 /* Helper that removes a struct file_region from the resv_map cache and returns
@@ -1100,7 +968,6 @@ struct resv_map *resv_map_alloc(void)
 	kref_init(&resv_map->refs);
 	spin_lock_init(&resv_map->lock);
 	INIT_LIST_HEAD(&resv_map->regions);
-	init_rwsem(&resv_map->rw_sema);
 	resv_map->adds_in_progress = 0;
 
 	/*
@@ -1194,22 +1061,11 @@ void hugetlb_dup_vma_private(struct vm_area_struct *vma)
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 	/*
 	 * Clear vm_private_data
-	 * - For shared mappings this is a per-vma semaphore that may be
-	 *   allocated in a subsequent call to hugetlb_vm_op_open.
-	 *   Before clearing, make sure pointer is not associated with vma
-	 *   as this will leak the structure. This is the case when called
-	 *   via clear_vma_resv_huge_pages() and hugetlb_vm_op_open has already
-	 *   been called to allocate a new structure.
 	 * - For MAP_PRIVATE mappings, this is the reserve map which does
 	 *   not apply to children. Faults generated by the children are
 	 *   not guaranteed to succeed, even if read-only.
 	 */
-	if (vma->vm_flags & VM_MAYSHARE) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		if (vma_lock && vma_lock->vma != vma)
-			vma->vm_private_data = NULL;
-	} else
+	if (!(vma->vm_flags & VM_MAYSHARE))
 		vma->vm_private_data = NULL;
 }
 
@@ -4845,25 +4701,6 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
 		resv_map_dup_hugetlb_cgroup_uncharge_info(resv);
 		kref_get(&resv->refs);
 	}
-
-	/*
-	 * vma_lock structure for sharable mappings is vma specific.
-	 * Clear old pointer (if copied via vm_area_dup) and allocate
-	 * new structure. Before clearing, make sure vma_lock is not
-	 * for this vma.
-	 */
-	if (vma->vm_flags & VM_MAYSHARE) {
-		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
-
-		if (vma_lock) {
-			if (vma_lock->vma != vma) {
-				vma->vm_private_data = NULL;
-				hugetlb_vma_lock_alloc(vma);
-			} else
-				pr_warn("HugeTLB: vma_lock already exists in %s.\n", __func__);
-		} else
-			hugetlb_vma_lock_alloc(vma);
-	}
 }
 
 static void hugetlb_vm_op_close(struct vm_area_struct *vma)
@@ -4874,8 +4711,6 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
 	unsigned long reserve, start, end;
 	long gbl_reserve;
 
-	hugetlb_vma_lock_free(vma);
-
 	resv = vma_resv_map(vma);
 	if (!resv || !is_vma_resv_set(vma, HPAGE_RESV_OWNER))
 		return;
@@ -5047,16 +4882,10 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		mmu_notifier_invalidate_range_start(&range);
 		vma_assert_write_locked(src_vma);
 		raw_write_seqcount_begin(&src->write_protect_seq);
-	} else {
-		/*
-		 * For shared mappings the vma lock must be held before
-		 * calling hugetlb_walk() in the src vma. Otherwise, the
-		 * returned ptep could go away if part of a shared pmd and
-		 * another thread calls huge_pmd_unshare.
-		 */
-		hugetlb_vma_lock_read(src_vma);
 	}
 
+	hugetlb_vma_lock_read(src_vma);
+
 	last_addr_mask = hugetlb_mask_last_page(h);
 	for (addr = src_vma->vm_start; addr < src_vma->vm_end; addr += sz) {
 		spinlock_t *src_ptl, *dst_ptl;
@@ -5208,10 +5037,10 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	if (cow) {
 		raw_write_seqcount_end(&src->write_protect_seq);
 		mmu_notifier_invalidate_range_end(&range);
-	} else {
-		hugetlb_vma_unlock_read(src_vma);
 	}
 
+	hugetlb_vma_unlock_read(src_vma);
+
 	return ret;
 }
 
@@ -5449,28 +5278,12 @@ void __hugetlb_zap_begin(struct vm_area_struct *vma,
 void __hugetlb_zap_end(struct vm_area_struct *vma,
 		       struct zap_details *details)
 {
-	zap_flags_t zap_flags = details ? details->zap_flags : 0;
-
 	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
 		return;
 
-	if (zap_flags & ZAP_FLAG_UNMAP) {	/* final unmap */
-		/*
-		 * Unlock and free the vma lock before releasing i_mmap_rwsem.
-		 * When the vma_lock is freed, this makes the vma ineligible
-		 * for pmd sharing. And, i_mmap_rwsem is required to set up
-		 * pmd sharing. This is important as page tables for this
-		 * unmapped range will be asynchrously deleted. If the page
-		 * tables are shared, there will be issues when accessed by
-		 * someone else.
-		 */
-		__hugetlb_vma_unlock_write_free(vma);
-	} else {
-		hugetlb_vma_unlock_write(vma);
-	}
-
 	if (vma->vm_file)
 		i_mmap_unlock_write(vma->vm_file->f_mapping);
+	hugetlb_vma_unlock_write(vma);
 }
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
@@ -6713,12 +6526,6 @@ bool hugetlb_reserve_pages(struct inode *inode,
 		return false;
 	}
 
-	/*
-	 * vma specific semaphore used for pmd sharing and fault/truncation
-	 * synchronization
-	 */
-	hugetlb_vma_lock_alloc(vma);
-
 	/*
 	 * Only apply hugepage reservation if asked. At fault time, an
 	 * attempt will be made for VM_NORESERVE to allocate a page
@@ -6841,7 +6648,6 @@ bool hugetlb_reserve_pages(struct inode *inode,
 	hugetlb_cgroup_uncharge_cgroup_rsvd(hstate_index(h),
 					    chg * pages_per_huge_page(h), h_cg);
 out_err:
-	hugetlb_vma_lock_free(vma);
 	if (!vma || vma->vm_flags & VM_MAYSHARE)
 		/* Only call region_abort if the region_chg succeeded but the
 		 * region_add failed or didn't run.
@@ -6913,13 +6719,10 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
 	/*
 	 * match the virtual addresses, permission and the alignment of the
 	 * page table page.
-	 *
-	 * Also, vma_lock (vm_private_data) is required for sharing.
 	 */
 	if (pmd_index(addr) != pmd_index(saddr) ||
 	    vm_flags != svm_flags ||
-	    !range_in_vma(svma, sbase, s_end) ||
-	    !svma->vm_private_data)
+	    !range_in_vma(svma, sbase, s_end))
 		return 0;
 
 	return saddr;
@@ -6939,8 +6742,6 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
 	 */
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		return false;
-	if (!vma->vm_private_data)	/* vma lock required for sharing */
-		return false;
 	if (!range_in_vma(vma, start, end))
 		return false;
 	return true;