From patchwork Thu Aug 3 17:26:49 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13340371
Date: Thu, 3 Aug 2023 10:26:49 -0700
In-Reply-To: <20230803172652.2849981-1-surenb@google.com>
References: <20230803172652.2849981-1-surenb@google.com>
Message-ID: <20230803172652.2849981-5-surenb@google.com>
X-Mailer: git-send-email 2.41.0.585.gd2178a4bd4-goog
Subject: [PATCH v3 4/6] mm: lock vma explicitly before doing vm_flags_reset
 and vm_flags_reset_once
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, jannh@google.com, willy@infradead.org,
 liam.howlett@oracle.com, david@redhat.com, peterx@redhat.com,
 ldufour@linux.ibm.com, vbabka@suse.cz, michel@lespinasse.org,
 jglisse@google.com, mhocko@suse.com, hannes@cmpxchg.org, dave@stgolabs.net,
 hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 stable@vger.kernel.org, Suren Baghdasaryan <surenb@google.com>,
 Linus Torvalds <torvalds@linux-foundation.org>

Implicit vma locking inside vm_flags_reset() and vm_flags_reset_once() is
not obvious and makes it hard to understand where vma locking is happening.
Also in some cases (like in dup_userfaultfd()) vma should be locked earlier
than vm_flags modification. To make locking more visible, change these
functions to assert that the vma write lock is taken and explicitly lock
the vma beforehand. Fix userfaultfd functions which should lock the vma
earlier.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 arch/powerpc/kvm/book3s_hv_uvmem.c |  1 +
 fs/userfaultfd.c                   |  6 ++++++
 include/linux/mm.h                 | 10 +++++++---
 mm/madvise.c                       |  5 ++---
 mm/mlock.c                         |  3 ++-
 mm/mprotect.c                      |  1 +
 6 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 709ebd578394..e2d6f9327f77 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -410,6 +410,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
 			ret = H_STATE;
 			break;
 		}
+		vma_start_write(vma);
 		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
 		vm_flags = vma->vm_flags;
 		ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 7cecd49e078b..6cde95533dcd 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -667,6 +667,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 		mmap_write_lock(mm);
 		for_each_vma(vmi, vma) {
 			if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
+				vma_start_write(vma);
 				vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 				userfaultfd_set_vm_flags(vma,
 							 vma->vm_flags & ~__VM_UFFD_FLAGS);
@@ -702,6 +703,7 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 
 	octx = vma->vm_userfaultfd_ctx.ctx;
 	if (!octx || !(octx->features & UFFD_FEATURE_EVENT_FORK)) {
+		vma_start_write(vma);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 		userfaultfd_set_vm_flags(vma, vma->vm_flags & ~__VM_UFFD_FLAGS);
 		return 0;
@@ -783,6 +785,7 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 		atomic_inc(&ctx->mmap_changing);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
+		vma_start_write(vma);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 		userfaultfd_set_vm_flags(vma, vma->vm_flags & ~__VM_UFFD_FLAGS);
 	}
@@ -940,6 +943,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 			prev = vma;
 		}
 
+		vma_start_write(vma);
 		userfaultfd_set_vm_flags(vma, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 	}
@@ -1502,6 +1506,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
+		vma_start_write(vma);
 		userfaultfd_set_vm_flags(vma, new_flags);
 		vma->vm_userfaultfd_ctx.ctx = ctx;
 
@@ -1685,6 +1690,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		 * the next vma was merged into the current one and
 		 * the current one has not been updated yet.
 		 */
+		vma_start_write(vma);
 		userfaultfd_set_vm_flags(vma, new_flags);
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 262b5f44101d..2c720c9bb1ae 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -780,18 +780,22 @@ static inline void vm_flags_init(struct vm_area_struct *vma,
 	ACCESS_PRIVATE(vma, __vm_flags) = flags;
 }
 
-/* Use when VMA is part of the VMA tree and modifications need coordination */
+/*
+ * Use when VMA is part of the VMA tree and modifications need coordination
+ * Note: vm_flags_reset and vm_flags_reset_once do not lock the vma and
+ * it should be locked explicitly beforehand.
+ */
 static inline void vm_flags_reset(struct vm_area_struct *vma,
 				  vm_flags_t flags)
 {
-	vma_start_write(vma);
+	vma_assert_write_locked(vma);
 	vm_flags_init(vma, flags);
 }
 
 static inline void vm_flags_reset_once(struct vm_area_struct *vma,
 				       vm_flags_t flags)
 {
-	vma_start_write(vma);
+	vma_assert_write_locked(vma);
 	WRITE_ONCE(ACCESS_PRIVATE(vma, __vm_flags), flags);
 }
 
diff --git a/mm/madvise.c b/mm/madvise.c
index bfe0e06427bd..507b1d299fec 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -173,9 +173,8 @@ static int madvise_update_vma(struct vm_area_struct *vma,
 	}
 
 success:
-	/*
-	 * vm_flags is protected by the mmap_lock held in write mode.
-	 */
+	/* vm_flags is protected by the mmap_lock held in write mode. */
+	vma_start_write(vma);
 	vm_flags_reset(vma, new_flags);
 	if (!vma->vm_file || vma_is_anon_shmem(vma)) {
 		error = replace_anon_vma_name(vma, anon_name);
diff --git a/mm/mlock.c b/mm/mlock.c
index 479e09d0994c..06bdfab83b58 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -387,6 +387,7 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
 	 */
 	if (newflags & VM_LOCKED)
 		newflags |= VM_IO;
+	vma_start_write(vma);
 	vm_flags_reset_once(vma, newflags);
 
 	lru_add_drain();
@@ -461,9 +462,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * It's okay if try_to_unmap_one unmaps a page just after we
 	 * set VM_LOCKED, populate_vma_page_range will bring it back.
 	 */
-
 	if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
 		/* No work to do, and mlocking twice would be wrong */
+		vma_start_write(vma);
 		vm_flags_reset(vma, newflags);
 	} else {
 		mlock_vma_pages_range(vma, start, end, newflags);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 3aef1340533a..362e190a8f81 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -657,6 +657,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	 * vm_flags and vm_page_prot are protected by the mmap_lock
 	 * held in write mode.
 	 */
+	vma_start_write(vma);
 	vm_flags_reset(vma, newflags);
 	if (vma_wants_manual_pte_write_upgrade(vma))
 		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
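
For readers skimming the diff, the calling convention this patch
establishes can be summarized with a short sketch. This is illustrative
only, not part of the patch; example_set_flags() is a hypothetical caller:

	/*
	 * Hypothetical caller under the new convention: vm_flags_reset()
	 * no longer write-locks the vma itself, it only asserts that the
	 * lock is held, so the caller must take the per-VMA write lock
	 * explicitly while holding mmap_lock for writing.
	 */
	static void example_set_flags(struct vm_area_struct *vma,
				      vm_flags_t newflags)
	{
		mmap_assert_write_locked(vma->vm_mm);

		vma_start_write(vma);		/* explicit per-VMA write lock */
		vm_flags_reset(vma, newflags);	/* vma_assert_write_locked() passes */
	}

Since vma_start_write() itself can only be called with mmap_lock held for
writing, the new assertion catches any path that resets vm_flags without
having write-locked the vma first.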