From patchwork Tue Feb 6 01:09:18 2024
X-Patchwork-Submitter: Lokesh Gidra
X-Patchwork-Id: 13546498
From: Lokesh Gidra
Date: Mon, 5 Feb 2024 17:09:18 -0800
Subject: [PATCH v3 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaultfd_ctx
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com, kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com, david@redhat.com, axelrasmussen@google.com, bgeffon@google.com, willy@infradead.org, jannh@google.com, kaleshsingh@google.com, ngeoffray@google.com, timmurray@google.com, rppt@kernel.org, Liam.Howlett@oracle.com
Message-ID: <20240206010919.1109005-3-lokeshgidra@google.com>
In-Reply-To: <20240206010919.1109005-1-lokeshgidra@google.com>
References: <20240206010919.1109005-1-lokeshgidra@google.com>
Increments and loads to mmap_changing are always done within the
mmap_lock critical section. This ensures that if userspace requests
event notification for non-cooperative operations (e.g. mremap),
userfaultfd operations don't occur concurrently.

The same serialization can be achieved by using a separate read-write
semaphore in userfaultfd_ctx, such that increments are done in
write-mode and loads in read-mode, thereby eliminating the dependency
on mmap_lock for this purpose.

This is a preparatory step before we replace mmap_lock usage with
per-vma locks in fill/move ioctls.

Signed-off-by: Lokesh Gidra
Reviewed-by: Mike Rapoport (IBM)
---
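The scheme in a nutshell, as a minimal userspace analogue (illustrative
only: pthread_rwlock_t stands in for the kernel's struct rw_semaphore,
atomic_int for atomic_t, and the function names are made up):

#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>

static pthread_rwlock_t map_changing_lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_int mmap_changing;

/* Non-cooperative event begins: write mode drains in-flight operations. */
static void event_begin(void)
{
	pthread_rwlock_wrlock(&map_changing_lock);
	atomic_fetch_add(&mmap_changing, 1);
	pthread_rwlock_unlock(&map_changing_lock);
}

/* Event fully handled: no lock needed, operations only bail on non-zero. */
static void event_end(void)
{
	atomic_fetch_sub(&mmap_changing, 1);
}

/* Fill/move/wp-style operation: read-mode load, -EAGAIN while changing. */
static int uffd_op(void)
{
	int ret = -EAGAIN;

	pthread_rwlock_rdlock(&map_changing_lock);
	if (!atomic_load(&mmap_changing))
		ret = 0; /* ... do the actual fill/move/wp work here ... */
	pthread_rwlock_unlock(&map_changing_lock);
	return ret;
}

The write-mode increment is what provides the exclusion: the event side
cannot take the semaphore in write mode while any operation still holds
it in read mode, and once the increment is published every later
operation observes mmap_changing and bails out with -EAGAIN instead of
racing with the event.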
 fs/userfaultfd.c              | 40 ++++++++++++----------
 include/linux/userfaultfd_k.h | 31 ++++++++++--------
 mm/userfaultfd.c              | 62 ++++++++++++++++++++---------------
 3 files changed, 75 insertions(+), 58 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 58331b83d648..c00a021bcce4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 		ctx->flags = octx->flags;
 		ctx->features = octx->features;
 		ctx->released = false;
+		init_rwsem(&ctx->map_changing_lock);
 		atomic_set(&ctx->mmap_changing, 0);
 		ctx->mm = vma->vm_mm;
 		mmgrab(ctx->mm);
 
 		userfaultfd_ctx_get(octx);
+		down_write(&octx->map_changing_lock);
 		atomic_inc(&octx->mmap_changing);
+		up_write(&octx->map_changing_lock);
 		fctx->orig = octx;
 		fctx->new = ctx;
 		list_add_tail(&fctx->list, fcs);
@@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	if (ctx->features & UFFD_FEATURE_EVENT_REMAP) {
 		vm_ctx->ctx = ctx;
 		userfaultfd_ctx_get(ctx);
+		down_write(&ctx->map_changing_lock);
 		atomic_inc(&ctx->mmap_changing);
+		up_write(&ctx->map_changing_lock);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma_start_write(vma);
@@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
 		return true;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	mmap_read_unlock(mm);
 
 	msg_init(&ewq.msg);
@@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
 		return -ENOMEM;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	unmap_ctx->ctx = ctx;
 	unmap_ctx->start = start;
 	unmap_ctx->end = end;
@@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
 		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
-					uffdio_copy.len, &ctx->mmap_changing,
-					flags);
+		ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src,
+					uffdio_copy.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start,
-					    uffdio_zeropage.range.len,
-					    &ctx->mmap_changing);
+		ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start,
+					    uffdio_zeropage.range.len);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-					  uffdio_wp.range.len, mode_wp,
-					  &ctx->mmap_changing);
+		ret = mwriteprotect_range(ctx, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		flags |= MFILL_ATOMIC_WP;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start,
-					    uffdio_continue.range.len,
-					    &ctx->mmap_changing, flags);
+		ret = mfill_atomic_continue(ctx, uffdio_continue.range.start,
+					    uffdio_continue.range.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start,
-					  uffdio_poison.range.len,
-					  &ctx->mmap_changing, 0);
+		ret = mfill_atomic_poison(ctx, uffdio_poison.range.start,
+					  uffdio_poison.range.len, 0);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 	if (mmget_not_zero(mm)) {
 		mmap_read_lock(mm);
 
-		/* Re-check after taking mmap_lock */
+		/* Re-check after taking map_changing_lock */
+		down_read(&ctx->map_changing_lock);
 		if (likely(!atomic_read(&ctx->mmap_changing)))
 			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
 					 uffdio_move.len, uffdio_move.mode);
 		else
 			ret = -EAGAIN;
-
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(mm);
 		mmput(mm);
 	} else {
@@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags)
 	ctx->flags = flags;
 	ctx->features = 0;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = current->mm;
 	/* prevent the mm struct to be freed */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 691d928ee864..3210c3552976 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -69,6 +69,13 @@ struct userfaultfd_ctx {
 	unsigned int features;
 	/* released */
 	bool released;
+	/*
+	 * Prevents userfaultfd operations (fill/move/wp) from happening while
+	 * some non-cooperative event(s) is taking place. Increments are done
+	 * in write-mode. Whereas, userfaultfd operations, which includes
+	 * reading mmap_changing, is done under read-mode.
+	 */
+	struct rw_semaphore map_changing_lock;
 	/* memory mappings are changing because of non-cooperative event */
 	atomic_t mmap_changing;
 	/* mm with one ore more vmas attached to this userfaultfd_ctx */
@@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);
 
-extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, uffd_flags_t flags);
-extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
+				 uffd_flags_t flags);
+extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 				     unsigned long dst_start,
-				     unsigned long len,
-				     atomic_t *mmap_changing);
-extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start,
-				     unsigned long len, atomic_t *mmap_changing,
-				     uffd_flags_t flags);
-extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-				   unsigned long len, atomic_t *mmap_changing,
-				   uffd_flags_t flags);
-extern int mwriteprotect_range(struct mm_struct *dst_mm,
-			       unsigned long start, unsigned long len,
-			       bool enable_wp, atomic_t *mmap_changing);
+				     unsigned long len);
+extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+				     unsigned long len, uffd_flags_t flags);
+extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+				   unsigned long len, uffd_flags_t flags);
+extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			       unsigned long len, bool enable_wp);
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len,
 			  bool enable_wp);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9cc93cc1330b..74aad0831e40 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
+					      struct userfaultfd_ctx *ctx,
 					      struct vm_area_struct *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      atomic_t *mmap_changing,
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
@@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * feature is not supported.
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			down_read(&ctx->map_changing_lock);
 			/*
 			 * If memory mappings are changing because of non-cooperative
 			 * operation (e.g. mremap) running in parallel, bail out and
 			 * request the user to retry later
 			 */
-			if (mmap_changing && atomic_read(mmap_changing)) {
+			if (atomic_read(&ctx->mmap_changing)) {
 				err = -EAGAIN;
 				break;
 			}
@@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
+				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    atomic_t *mmap_changing,
 				    uffd_flags_t flags);
 #endif /* CONFIG_HUGETLB_PAGE */
 
@@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	return err;
 }
 
-static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    atomic_t *mmap_changing,
 					    uffd_flags_t flags)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
@@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	/*
@@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return  mfill_atomic_hugetlb(dst_vma, dst_start, src_start,
-					     len, mmap_changing, flags);
+		return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
+					     src_start, len, flags);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;
 
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	return copied ? copied : err;
 }
 
-ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, uffd_flags_t flags)
+			  uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing,
+	return mfill_atomic(ctx, dst_start, src_start, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY));
 }
 
-ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing)
+ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
+			      unsigned long start,
+			      unsigned long len)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE));
 }
 
-ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing,
-			      uffd_flags_t flags)
+ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
+			      unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }
 
-ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-			    unsigned long len, atomic_t *mmap_changing,
-			    uffd_flags_t flags)
+ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+			    unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }
 
@@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma,
 	return ret;
 }
 
-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			unsigned long len, bool enable_wp)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
@@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	err = -ENOENT;
@@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		err = 0;
 	}
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 	return err;
 }
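The userspace contract is unchanged by this patch: the ioctls still
return -EAGAIN while mmap_changing is non-zero. For illustration, a
hedged sketch of how a non-cooperative manager might retry UFFDIO_COPY
(the helper name and setup are assumptions, not part of this patch):

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

static long uffd_copy_retry(int uffd, unsigned long dst, unsigned long src,
			    unsigned long len)
{
	struct uffdio_copy copy;

	while (len) {
		memset(&copy, 0, sizeof(copy));
		copy.dst = dst;
		copy.src = src;
		copy.len = len;

		if (ioctl(uffd, UFFDIO_COPY, &copy) == 0)
			return 0;	/* whole range filled */
		if (errno != EAGAIN)
			return -1;
		/* Partial progress (if any) is reported back in copy.copy. */
		if (copy.copy > 0) {
			dst += copy.copy;
			src += copy.copy;
			len -= copy.copy;
		}
		/*
		 * A real manager would drain the pending event(s) with
		 * read(uffd, ...) here before retrying, since mmap_changing
		 * stays non-zero until the event is consumed.
		 */
	}
	return 0;
}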