From patchwork Wed Jun 28 17:25:29 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13296089
Date: Wed, 28 Jun 2023 10:25:29 -0700
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
X-Mailer: git-send-email 2.41.0.162.gfafddb0af9-goog
Message-ID: <20230628172529.744839-7-surenb@google.com>
Subject: [PATCH v5 6/6] mm: handle userfaults under VMA lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com,
    josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
    laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
    jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
    punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
    apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com,
    david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
    viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
    surenb@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Enable handle_userfault to operate under VMA lock by releasing VMA lock
instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
should never be used when handling faults under per-VMA lock protection
because that would break the assumption that lock is dropped on retry.

Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 fs/userfaultfd.c   | 34 ++++++++++++++--------------------
 include/linux/mm.h | 26 ++++++++++++++++++++++++++
 mm/memory.c        | 20 +++++++++++---------
 3 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 4e800bb7d2ab..9d61e3e7da7b 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -277,17 +277,16 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
  * hugepmd ranges.
  */
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;
 
-	mmap_assert_locked(ctx->mm);
+	assert_fault_locked(vmf);
 
-	ptep = hugetlb_walk(vma, address, vma_mmu_pagesize(vma));
+	ptep = hugetlb_walk(vma, vmf->address, vma_mmu_pagesize(vma));
 	if (!ptep)
 		goto out;
@@ -308,10 +307,8 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
 	return false;	/* should never get here */
 }
@@ -325,11 +322,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
  * threads.
  */
 static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
-					 unsigned long address,
-					 unsigned long flags,
+					 struct vm_fault *vmf,
 					 unsigned long reason)
 {
 	struct mm_struct *mm = ctx->mm;
+	unsigned long address = vmf->address;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -337,7 +334,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	pte_t *pte;
 	bool ret = true;
 
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -445,7 +442,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * Coredumping runs without mmap_lock so we can only check that
 	 * the mmap_lock is held, if PF_DUMPCORE was not set.
 	 */
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	ctx = vma->vm_userfaultfd_ctx.ctx;
 	if (!ctx)
@@ -561,15 +558,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
 
 	if (!is_vm_hugetlb_page(vma))
-		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
-						  reason);
+		must_wait = userfaultfd_must_wait(ctx, vmf, reason);
 	else
-		must_wait = userfaultfd_huge_must_wait(ctx, vma,
-						       vmf->address,
-						       vmf->flags, reason);
+		must_wait = userfaultfd_huge_must_wait(ctx, vmf, reason);
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
-	mmap_read_unlock(mm);
+	release_fault_lock(vmf);
 
 	if (likely(must_wait && !READ_ONCE(ctx->released))) {
 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bbaec479bf98..cd5389338def 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -705,6 +705,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
 	return true;
 }
 
+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+	int mm_lock_seq;
+
+	if (__is_vma_write_locked(vma, &mm_lock_seq))
+		return;
+
+	lockdep_assert_held(&vma->vm_lock->lock);
+	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
@@ -731,6 +742,15 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 	mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline
+void assert_fault_locked(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_assert_locked(vmf->vma);
+	else
+		mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -749,6 +769,12 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 	mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline
+void assert_fault_locked(struct vm_fault *vmf)
+{
+	mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 4fb8ecfc6d13..672f7383a622 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5202,6 +5202,17 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
 				 !is_cow_mapping(vma->vm_flags)))
 			return VM_FAULT_SIGSEGV;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * Per-VMA locks can't be used with FAULT_FLAG_RETRY_NOWAIT because of
+	 * the assumption that lock is dropped on VM_FAULT_RETRY.
+	 */
+	if (WARN_ON_ONCE((*flags &
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)) ==
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)))
+		return VM_FAULT_SIGSEGV;
+#endif
+
 	return 0;
 }
 
@@ -5294,15 +5305,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/*
-	 * Due to the possibility of userfault handler dropping mmap_lock, avoid
-	 * it for now and fall back to page fault handling under mmap_lock.
-	 */
-	if (userfaultfd_armed(vma)) {
-		vma_end_read(vma);
-		goto inval;
-	}
-
 	/* Check since vm_start/vm_end might change before we lock the VMA */
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
 		vma_end_read(vma);