From patchwork Thu Feb 8 21:22:04 2024
X-Patchwork-Submitter: Lokesh Gidra
X-Patchwork-Id: 13550494
Date: Thu, 8 Feb 2024 13:22:04 -0800
In-Reply-To: <20240208212204.2043140-1-lokeshgidra@google.com>
References: <20240208212204.2043140-1-lokeshgidra@google.com>
Message-ID: <20240208212204.2043140-4-lokeshgidra@google.com>
Subject: [PATCH v4 3/3] userfaultfd: use per-vma locks in userfaultfd operations
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
    kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
    david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
    willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
    ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
    Liam.Howlett@oracle.com

All userfaultfd operations, except write-protect, opportunistically use
per-vma locks to lock vmas. On failure, attempt again inside mmap_lock
critical section. Write-protect operation requires mmap_lock as it
iterates over multiple vmas.
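In outline, each operation now tries the cheap per-vma lock first and
falls back to mmap_lock only when that fails. A condensed sketch of the
fallback sequence (this summarizes the lock_vma() helper introduced
below; error handling and anon_vma preparation are elided, it is not
additional code):

	vma = lock_vma_under_rcu(mm, address);	/* lock-free fast path */
	if (!vma) {
		mmap_read_lock(mm);		/* slow path */
		vma = vma_lookup(mm, address);
		if (vma)
			/* cannot be write-locked under us while mmap_lock is held */
			down_read(&vma->vm_lock->lock);
		mmap_read_unlock(mm);		/* the vma lock keeps the vma stable */
	}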
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
---
 fs/userfaultfd.c              |  13 +-
 include/linux/userfaultfd_k.h |   5 +-
 mm/userfaultfd.c              | 356 ++++++++++++++++++++++++++--------
 3 files changed, 275 insertions(+), 99 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index c00a021bcce4..60dcfafdc11a 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(mm)) {
-		mmap_read_lock(mm);
-
-		/* Re-check after taking map_changing_lock */
-		down_read(&ctx->map_changing_lock);
-		if (likely(!atomic_read(&ctx->mmap_changing)))
-			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
-					 uffdio_move.len, uffdio_move.mode);
-		else
-			ret = -EAGAIN;
-		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(mm);
+		ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src,
+				 uffdio_move.len, uffdio_move.mode);
 		mmput(mm);
 	} else {
 		return -ESRCH;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 3210c3552976..05d59f74fc88 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma,
 /* move_pages */
 void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
 void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 flags);
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 flags);
 int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 			pmd_t dst_pmdval, struct vm_area_struct *dst_vma,
 			struct vm_area_struct *src_vma,
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 74aad0831e40..1e25768b2136 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -19,20 +19,12 @@
 #include <asm/tlb.h>
 #include "internal.h"
 
-static __always_inline
-struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
-				    unsigned long dst_start,
-				    unsigned long len)
+static bool validate_dst_vma(struct vm_area_struct *dst_vma,
+			     unsigned long dst_end)
 {
-	/*
-	 * Make sure that the dst range is both valid and fully within a
-	 * single existing vma.
-	 */
-	struct vm_area_struct *dst_vma;
-
-	dst_vma = find_vma(dst_mm, dst_start);
-	if (!range_in_vma(dst_vma, dst_start, dst_start + len))
-		return NULL;
+	/* Make sure that the dst range is fully within dst_vma. */
+	if (dst_end > dst_vma->vm_end)
+		return false;
 
 	/*
 	 * Check the vma is registered in uffd, this is required to
@@ -40,11 +32,125 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
-		return NULL;
+		return false;
+
+	return true;
+}
+
+#ifdef CONFIG_PER_VMA_LOCK
+/*
+ * lock_vma() - Lookup and lock vma corresponding to @address.
+ * @mm: mm to search vma in.
+ * @address: address that the vma should contain.
+ * @prepare_anon: If true, then prepare the vma (if private) with anon_vma.
+ *
+ * Should be called without holding mmap_lock. vma should be unlocked after use
+ * with unlock_vma().
+ *
+ * Return: A locked vma containing @address, NULL if no vma is found, or
+ * -ENOMEM if anon_vma couldn't be allocated.
+ */
+static struct vm_area_struct *lock_vma(struct mm_struct *mm,
+				       unsigned long address,
+				       bool prepare_anon)
+{
+	struct vm_area_struct *vma;
+
+	vma = lock_vma_under_rcu(mm, address);
+	if (vma) {
+		/*
+		 * lock_vma_under_rcu() only checks anon_vma for private
+		 * anonymous mappings. But we need to ensure it is assigned in
+		 * private file-backed vmas as well.
+		 */
+		if (prepare_anon && !(vma->vm_flags & VM_SHARED) &&
+		    !vma->anon_vma)
+			vma_end_read(vma);
+		else
+			return vma;
+	}
+
+	mmap_read_lock(mm);
+	vma = vma_lookup(mm, address);
+	if (vma) {
+		if (prepare_anon && !(vma->vm_flags & VM_SHARED) &&
+		    anon_vma_prepare(vma)) {
+			vma = ERR_PTR(-ENOMEM);
+		} else {
+			/*
+			 * We cannot use vma_start_read() as it may fail due to
+			 * false locked (see comment in vma_start_read()). We
+			 * can avoid that by directly locking vm_lock under
+			 * mmap_lock, which guarantees that nobody can lock the
+			 * vma for write (vma_start_write()) under us.
+			 */
+			down_read(&vma->vm_lock->lock);
+		}
+	}
+
+	mmap_read_unlock(mm);
+	return vma;
+}
+
+static void unlock_vma(struct vm_area_struct *vma)
+{
+	vma_end_read(vma);
+}
+
+static struct vm_area_struct *find_and_lock_dst_vma(struct mm_struct *dst_mm,
+						    unsigned long dst_start,
+						    unsigned long len)
+{
+	struct vm_area_struct *dst_vma;
+
+	/* Ensure anon_vma is assigned for private vmas */
+	dst_vma = lock_vma(dst_mm, dst_start, true);
+
+	if (!dst_vma)
+		return ERR_PTR(-ENOENT);
+
+	if (PTR_ERR(dst_vma) == -ENOMEM)
+		return dst_vma;
+
+	if (!validate_dst_vma(dst_vma, dst_start + len))
+		goto out_unlock;
 
 	return dst_vma;
+out_unlock:
+	unlock_vma(dst_vma);
+	return ERR_PTR(-ENOENT);
 }
+#else
+
+static struct vm_area_struct *lock_mm_and_find_dst_vma(struct mm_struct *dst_mm,
+						       unsigned long dst_start,
+						       unsigned long len)
+{
+	struct vm_area_struct *dst_vma;
+	int err = -ENOENT;
+
+	mmap_read_lock(dst_mm);
+	dst_vma = vma_lookup(dst_mm, dst_start);
+	if (!dst_vma)
+		goto out_unlock;
+
+	/* Ensure anon_vma is assigned for private vmas */
+	if (!(dst_vma->vm_flags & VM_SHARED) && anon_vma_prepare(dst_vma)) {
+		err = -ENOMEM;
+		goto out_unlock;
+	}
+
+	if (!validate_dst_vma(dst_vma, dst_start + len))
+		goto out_unlock;
+
+	return dst_vma;
+out_unlock:
+	mmap_read_unlock(dst_mm);
+	return ERR_PTR(err);
+}
+#endif
 
 /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
 static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
 				 unsigned long dst_addr)
@@ -350,7 +456,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 #ifdef CONFIG_HUGETLB_PAGE
 /*
  * mfill_atomic processing for HUGETLB vmas. Note that this routine is
- * called with mmap_lock held, it will release mmap_lock before returning.
+ * called with either vma-lock or mmap_lock held, it will release the lock
+ * before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
 					      struct userfaultfd_ctx *ctx,
@@ -361,7 +468,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
-	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	ssize_t err;
 	pte_t *dst_pte;
 	unsigned long src_addr, dst_addr;
@@ -380,7 +486,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
 		up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+		unlock_vma(dst_vma);
+#else
 		mmap_read_unlock(dst_mm);
+#endif
 		return -EINVAL;
 	}
 
@@ -403,24 +513,32 @@
 	 * retry, dst_vma will be set to NULL and we must lookup again.
 	 */
 	if (!dst_vma) {
+#ifdef CONFIG_PER_VMA_LOCK
+		dst_vma = find_and_lock_dst_vma(dst_mm, dst_start, len);
+#else
+		dst_vma = lock_mm_and_find_dst_vma(dst_mm, dst_start, len);
+#endif
+		if (IS_ERR(dst_vma)) {
+			err = PTR_ERR(dst_vma);
+			goto out;
+		}
+
 		err = -ENOENT;
-		dst_vma = find_dst_vma(dst_mm, dst_start, len);
-		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
-			goto out_unlock;
+		if (!is_vm_hugetlb_page(dst_vma))
+			goto out_unlock_vma;
 
 		err = -EINVAL;
 		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
-			goto out_unlock;
-
-		vm_shared = dst_vma->vm_flags & VM_SHARED;
-	}
+			goto out_unlock_vma;
 
-	/*
-	 * If not shared, ensure the dst_vma has a anon_vma.
-	 */
-	err = -ENOMEM;
-	if (!vm_shared) {
-		if (unlikely(anon_vma_prepare(dst_vma)))
+		/*
+		 * If memory mappings are changing because of non-cooperative
+		 * operation (e.g. mremap) running in parallel, bail out and
+		 * request the user to retry later
+		 */
+		down_read(&ctx->map_changing_lock);
+		err = -EAGAIN;
+		if (atomic_read(&ctx->mmap_changing))
 			goto out_unlock;
 	}
 
@@ -465,7 +583,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+			unlock_vma(dst_vma);
+#else
 			mmap_read_unlock(dst_mm);
+#endif
 			BUG_ON(!folio);
 
 			err = copy_folio_from_user(folio,
@@ -474,17 +596,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				err = -EFAULT;
 				goto out;
 			}
-			mmap_read_lock(dst_mm);
-			down_read(&ctx->map_changing_lock);
-			/*
-			 * If memory mappings are changing because of non-cooperative
-			 * operation (e.g. mremap) running in parallel, bail out and
-			 * request the user to retry later
-			 */
-			if (atomic_read(&ctx->mmap_changing)) {
-				err = -EAGAIN;
-				break;
-			}
 
 			dst_vma = NULL;
 			goto retry;
@@ -505,7 +616,12 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
+out_unlock_vma:
+#ifdef CONFIG_PER_VMA_LOCK
+	unlock_vma(dst_vma);
+#else
 	mmap_read_unlock(dst_mm);
+#endif
 out:
 	if (folio)
 		folio_put(folio);
@@ -597,7 +713,19 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	copied = 0;
 	folio = NULL;
 retry:
-	mmap_read_lock(dst_mm);
+	/*
+	 * Make sure the vma is not shared, that the dst range is
+	 * both valid and fully within a single existing vma.
+	 */
+#ifdef CONFIG_PER_VMA_LOCK
+	dst_vma = find_and_lock_dst_vma(dst_mm, dst_start, len);
+#else
+	dst_vma = lock_mm_and_find_dst_vma(dst_mm, dst_start, len);
+#endif
+	if (IS_ERR(dst_vma)) {
+		err = PTR_ERR(dst_vma);
+		goto out;
+	}
 
 	/*
 	 * If memory mappings are changing because of non-cooperative
@@ -609,15 +737,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
-	/*
-	 * Make sure the vma is not shared, that the dst range is
-	 * both valid and fully within a single existing vma.
-	 */
-	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, dst_start, len);
-	if (!dst_vma)
-		goto out_unlock;
-
 	err = -EINVAL;
 	/*
 	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
@@ -647,16 +766,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
 		goto out_unlock;
 
-	/*
-	 * Ensure the dst_vma has a anon_vma or this page
-	 * would get a NULL anon_vma when moved in the
-	 * dst_vma.
-	 */
-	err = -ENOMEM;
-	if (!(dst_vma->vm_flags & VM_SHARED) &&
-	    unlikely(anon_vma_prepare(dst_vma)))
-		goto out_unlock;
-
 	while (src_addr < src_start + len) {
 		pmd_t dst_pmdval;
 
@@ -699,7 +808,11 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			void *kaddr;
 
 			up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+			unlock_vma(dst_vma);
+#else
 			mmap_read_unlock(dst_mm);
+#endif
 			BUG_ON(!folio);
 
 			kaddr = kmap_local_folio(folio, 0);
@@ -730,7 +843,11 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+	unlock_vma(dst_vma);
+#else
 	mmap_read_unlock(dst_mm);
+#endif
 out:
 	if (folio)
 		folio_put(folio);
@@ -1267,16 +1384,67 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
 	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
 		return -EINVAL;
 
-	/*
-	 * Ensure the dst_vma has a anon_vma or this page
-	 * would get a NULL anon_vma when moved in the
-	 * dst_vma.
-	 */
-	if (unlikely(anon_vma_prepare(dst_vma)))
-		return -ENOMEM;
+	return 0;
+}
+
+#ifdef CONFIG_PER_VMA_LOCK
+static int find_and_lock_vmas(struct mm_struct *mm,
+			      unsigned long dst_start,
+			      unsigned long src_start,
+			      struct vm_area_struct **dst_vmap,
+			      struct vm_area_struct **src_vmap)
+{
+	int err;
+
+	/* There is no need to prepare anon_vma for src_vma */
+	*src_vmap = lock_vma(mm, src_start, false);
+	if (!*src_vmap)
+		return -ENOENT;
+
+	/* Ensure anon_vma is assigned for anonymous vma */
+	*dst_vmap = lock_vma(mm, dst_start, true);
+	err = -ENOENT;
+	if (!*dst_vmap)
+		goto out_unlock;
+
+	err = -ENOMEM;
+	if (PTR_ERR(*dst_vmap) == -ENOMEM)
+		goto out_unlock;
 
 	return 0;
+out_unlock:
+	unlock_vma(*src_vmap);
+	return err;
 }
+#else
+static int lock_mm_and_find_vmas(struct mm_struct *mm,
+				 unsigned long dst_start,
+				 unsigned long src_start,
+				 struct vm_area_struct **dst_vmap,
+				 struct vm_area_struct **src_vmap)
+{
+	int err = -ENOENT;
+	mmap_read_lock(mm);
+
+	*src_vmap = vma_lookup(mm, src_start);
+	if (!*src_vmap)
+		goto out_unlock;
+
+	*dst_vmap = vma_lookup(mm, dst_start);
+	if (!*dst_vmap)
+		goto out_unlock;
+
+	/* Ensure anon_vma is assigned */
+	err = -ENOMEM;
+	if (vma_is_anonymous(*dst_vmap) && anon_vma_prepare(*dst_vmap))
+		goto out_unlock;
+
+	return 0;
+out_unlock:
+	mmap_read_unlock(mm);
+	return err;
+}
+#endif
 
 /**
  * move_pages - move arbitrary anonymous pages of an existing vma
@@ -1287,8 +1455,6 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
  * @len: length of the virtual memory range
  * @mode: flags from uffdio_move.mode
  *
- * Must be called with mmap_lock held for read.
- *
  * move_pages() remaps arbitrary anonymous pages atomically in zero
  * copy. It only works on non shared anonymous pages because those can
  * be relocated without generating non linear anon_vmas in the rmap
@@ -1355,10 +1521,10 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
 * could be obtained. This is the only additional complexity added to
 * the rmap code to provide this anonymous page remapping functionality.
 */
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 mode)
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 mode)
 {
+	struct mm_struct *mm = ctx->mm;
 	struct vm_area_struct *src_vma, *dst_vma;
 	unsigned long src_addr, dst_addr;
 	pmd_t *src_pmd, *dst_pmd;
@@ -1376,28 +1542,40 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 	    WARN_ON_ONCE(dst_start + len <= dst_start))
 		goto out;
 
+#ifdef CONFIG_PER_VMA_LOCK
+	err = find_and_lock_vmas(mm, dst_start, src_start,
+				 &dst_vma, &src_vma);
+#else
+	err = lock_mm_and_find_vmas(mm, dst_start, src_start,
+				    &dst_vma, &src_vma);
+#endif
+	if (err)
+		goto out;
+
+	/* Re-check after taking map_changing_lock */
+	down_read(&ctx->map_changing_lock);
+	if (likely(atomic_read(&ctx->mmap_changing))) {
+		err = -EAGAIN;
+		goto out_unlock;
+	}
 	/*
 	 * Make sure the vma is not shared, that the src and dst remap
 	 * ranges are both valid and fully within a single existing
 	 * vma.
 	 */
-	src_vma = find_vma(mm, src_start);
-	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
-		goto out;
-	if (src_start < src_vma->vm_start ||
-	    src_start + len > src_vma->vm_end)
-		goto out;
+	if (src_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
+	if (src_start + len > src_vma->vm_end)
+		goto out_unlock;
 
-	dst_vma = find_vma(mm, dst_start);
-	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
-		goto out;
-	if (dst_start < dst_vma->vm_start ||
-	    dst_start + len > dst_vma->vm_end)
-		goto out;
+	if (dst_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
+	if (dst_start + len > dst_vma->vm_end)
+		goto out_unlock;
 
 	err = validate_move_areas(ctx, src_vma, dst_vma);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	for (src_addr = src_start, dst_addr = dst_start;
 	     src_addr < src_start + len;) {
@@ -1514,6 +1692,14 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 		moved += step_size;
 	}
 
+out_unlock:
+	up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+	unlock_vma(dst_vma);
+	unlock_vma(src_vma);
+#else
+	mmap_read_unlock(mm);
+#endif
 out:
 	VM_WARN_ON(moved < 0);
 	VM_WARN_ON(err > 0);
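
For reference, a minimal userspace sketch of the UFFDIO_MOVE call that
exercises the move_pages() path above (hypothetical example, not part of
the patch; assumes "uffd" is a userfaultfd file descriptor registered over
the source range, and that "src", "dst" and "length" are page-aligned):

	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>
	#include <errno.h>

	struct uffdio_move mv = {
		.dst  = (unsigned long)dst,	/* hypothetical destination */
		.src  = (unsigned long)src,	/* hypothetical source */
		.len  = length,
		.mode = 0,
	};

	/* -EAGAIN surfaces when a non-cooperative event (e.g. mremap)
	 * raced with the move and mmap_changing was set; just retry. */
	if (ioctl(uffd, UFFDIO_MOVE, &mv) == -1 && errno == EAGAIN)
		/* retry the ioctl */;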