From patchwork Thu Dec 26 17:06:53 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921382
Date: Thu, 26 Dec 2024 09:06:53 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-2-surenb@google.com>
Subject: [PATCH v7 01/17] mm: introduce vma_start_read_locked{_nested} helpers
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
    surenb@google.com

Introduce helper functions which can be used to read-lock a VMA when
holding mmap_lock for read. Replace direct accesses to vma->vm_lock
with these new helpers.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Davidlohr Bueso
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 24 ++++++++++++++++++++++++
 mm/userfaultfd.c   | 22 +++++----------------
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 406b981af881..a48e207d25f2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -735,6 +735,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	return true;
 }
 
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read_nested(&vma->vm_lock->lock, subclass);
+}
+
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked(struct vm_area_struct *vma)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read(&vma->vm_lock->lock);
+}
+
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index af3dfc3633db..4527c385935b 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,16 +84,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
-	if (!IS_ERR(vma)) {
-		/*
-		 * We cannot use vma_start_read() as it may fail due to
-		 * false locked (see comment in vma_start_read()). We
-		 * can avoid that by directly locking vm_lock under
-		 * mmap_lock, which guarantees that nobody can lock the
-		 * vma for write (vma_start_write()) under us.
-		 */
-		down_read(&vma->vm_lock->lock);
-	}
+	if (!IS_ERR(vma))
+		vma_start_read_locked(vma);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1491,14 +1483,10 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		/*
-		 * See comment in uffd_lock_vma() as to why not using
-		 * vma_start_read() here.
-		 */
-		down_read(&(*dst_vmap)->vm_lock->lock);
+		vma_start_read_locked(*dst_vmap);
 		if (*dst_vmap != *src_vmap)
-			down_read_nested(&(*src_vmap)->vm_lock->lock,
-					 SINGLE_DEPTH_NESTING);
+			vma_start_read_locked_nested(*src_vmap,
+						     SINGLE_DEPTH_NESTING);
 	}
 	mmap_read_unlock(mm);
 	return err;
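
For illustration only (not part of the patch): a minimal sketch of how the new
helper is meant to be used by a caller that hands off from mmap_lock to the
per-VMA read lock, mirroring the uffd_lock_vma() change above. The wrapper
function name is hypothetical; find_vma() and the locking helpers are the real
kernel APIs.

	static struct vm_area_struct *pin_vma_for_work(struct mm_struct *mm,
							unsigned long addr)
	{
		struct vm_area_struct *vma;

		mmap_read_lock(mm);
		vma = find_vma(mm, addr);
		if (vma)
			/* cannot fail: mmap_lock excludes concurrent write-lockers */
			vma_start_read_locked(vma);
		mmap_read_unlock(mm);

		/* caller operates on vma and releases it with vma_end_read(vma) */
		return vma;
	}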

From patchwork Thu Dec 26 17:06:54 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921383
Date: Thu, 26 Dec 2024 09:06:54 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-3-surenb@google.com>
Subject: [PATCH v7 02/17] mm: move per-vma lock into vm_area_struct
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by false
cacheline sharing. Recent investigation [2] revealed that the regression is
limited to a rather old Broadwell microarchitecture and even there it can be
mitigated by disabling adjacent cacheline prefetching, see [3].

Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates things
even further. With no performance benefits, there are no reasons for this
split.

Merging the vm_lock back into vm_area_struct also allows vm_area_struct to use
SLAB_TYPESAFE_BY_RCU later in this patchset.

Move vm_lock back into vm_area_struct, aligning it at the cacheline boundary
and changing the cache to be cacheline-aligned as well. With the kernel
compiled using defconfig, this causes VMA memory consumption to grow from
160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:

  slabinfo before:
   ...            : ...
   vma_lock       ...  40  102  1 : ...
   vm_area_struct ... 160   51  2 : ...

  slabinfo after moving vm_lock:
   ...            : ...
   vm_area_struct ... 256   32  2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64 pages,
which is 5.5MB per 100000 VMAs. Note that the size of this structure is
dependent on the kernel configuration and typically the original size is
higher than 160 bytes. Therefore these calculations are close to the worst
case scenario. A more realistic vm_area_struct usage before this change is:

   ...            : ...
   vma_lock       ...  40  102  1 : ...
   vm_area_struct ... 176   46  2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64 pages,
which is 3.9MB per 100000 VMAs. This memory consumption growth can be
addressed later by optimizing the vm_lock.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h               | 28 ++++++++++--------
 include/linux/mm_types.h         |  6 ++--
 kernel/fork.c                    | 49 ++++----------------------------
 tools/testing/vma/vma_internal.h | 33 +++++----------------
 4 files changed, 32 insertions(+), 84 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a48e207d25f2..f3f92ba8f5fe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,6 +697,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->vm_lock.lock);
+	vma->vm_lock_seq = UINT_MAX;
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -714,7 +720,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
 		return false;
 
 	/*
@@ -729,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock->lock);
+		up_read(&vma->vm_lock.lock);
 		return false;
 	}
 	return true;
@@ -744,7 +750,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock->lock, subclass);
+	down_read_nested(&vma->vm_lock.lock, subclass);
 }
 
 /*
@@ -756,13 +762,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock->lock);
+	down_read(&vma->vm_lock.lock);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock->lock);
+	up_read(&vma->vm_lock.lock);
 	rcu_read_unlock();
 }
 
@@ -791,7 +797,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock->lock);
+	down_write(&vma->vm_lock.lock);
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -799,7 +805,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock->lock);
+	up_write(&vma->vm_lock.lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -811,7 +817,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock->lock))
+	if (!rwsem_is_locked(&vma->vm_lock.lock))
 		vma_assert_write_locked(vma);
 }
 
@@ -844,6 +850,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -878,10 +885,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
@@ -890,6 +893,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
 	vma_numab_state_init(vma);
+	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5f1b2dc788e2..6573d95f1d1e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -730,8 +730,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock *vm_lock;
 #endif
 
 	/*
@@ -784,6 +782,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index ded49f18cd95..40a8e615499f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
-	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		kmem_cache_free(vm_area_cachep, vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	if (!vma_lock_alloc(new)) {
-		kmem_cache_free(vm_area_cachep, new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 {
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
-	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3188,11 +3151,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
-	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+	vm_area_cachep = KMEM_CACHE(vm_area_struct,
+			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index ae635eecbfa8..d19ce6fcab83 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -270,10 +270,10 @@ struct vm_area_struct {
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 *  - mmap_lock (in write mode)
-	 *  - vm_lock->lock (in write mode)
+	 *  - vm_lock.lock (in write mode)
 	 * Can be read reliably while holding one of:
 	 *  - mmap_lock (in read or write mode)
-	 *  - vm_lock->lock (in read or write mode)
+	 *  - vm_lock.lock (in read or write mode)
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -282,7 +282,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock *vm_lock;
+	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -459,17 +459,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
 {
-	vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
+	init_rwsem(&vma->vm_lock.lock);
 	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -492,6 +485,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
+	vma_lock_init(vma);
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -502,10 +496,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		free(vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -518,10 +508,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	if (!vma_lock_alloc(new)) {
-		free(new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 
 	return new;
@@ -691,14 +678,8 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	free(vma->vm_lock);
-}
-
 static inline void __vm_area_free(struct vm_area_struct *vma)
 {
-	vma_lock_free(vma);
 	free(vma);
 }
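
For reference, the page figures quoted in the changelog above follow directly
from the slabinfo numbers (a back-of-the-envelope check, not part of the
patch), assuming 4KB pages:

  before: ceil(1000/51) slabs * 2 pages (vm_area_struct)
          + ceil(1000/102) slabs * 1 page (vma_lock)      = 40 + 10 = 50 pages
  after:  ceil(1000/32) slabs * 2 pages (vm_area_struct)  = 64 pages

  delta:  14 pages per 1000 VMAs ~= 5.5MB per 100000 VMAs

With the more realistic 176-byte vm_area_struct (46 objects per 2-page slab),
the "before" figure is ceil(1000/46) * 2 + 10 = 54 pages, hence the quoted
3.9MB per 100000 VMAs.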

From patchwork Thu Dec 26 17:06:55 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921384
Date: Thu, 26 Dec 2024 09:06:55 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-4-surenb@google.com>
Subject: [PATCH v7 03/17] mm: mark vma as detached until it's added into vma tree
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

The current implementation does not set the detached flag when a VMA is first
allocated. This does not represent the real state of the VMA, which is
detached until it is added into the mm's VMA tree. Fix this by marking new
VMAs as detached and resetting the detached flag only after the VMA is added
into a tree.

Introduce vma_mark_attached() to make the API more readable and to simplify
possible future cleanup when vma->vm_mm might be used to indicate a detached
vma and vma_mark_attached() will need an additional mm parameter.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Shakeel Butt
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h               | 27 ++++++++++++++++++++-------
 kernel/fork.c                    |  4 ++++
 mm/memory.c                      |  2 +-
 mm/vma.c                         |  6 +++---
 mm/vma.h                         |  2 ++
 tools/testing/vma/vma_internal.h | 17 ++++++++++++-----
 6 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f3f92ba8f5fe..081178b0eec4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,12 +821,21 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 		vma_assert_write_locked(vma);
 }
 
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
+}
+
+static inline bool is_vma_detached(struct vm_area_struct *vma)
+{
+	return vma->detached;
 }
 
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -857,8 +866,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 		{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_mark_detached(struct vm_area_struct *vma,
-				     bool detached) {}
+static inline void vma_mark_attached(struct vm_area_struct *vma) {}
+static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 
 static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		unsigned long address)
@@ -891,7 +900,10 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
+#endif
 	vma_numab_state_init(vma);
 	vma_lock_init(vma);
 }
@@ -1086,6 +1098,7 @@ static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
+	vma_mark_attached(vma);
 	return 0;
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 40a8e615499f..f2f9e7b427ad 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -465,6 +465,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	data_race(memcpy(new, orig, sizeof(*new)));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
+#endif
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
 
diff --git a/mm/memory.c b/mm/memory.c
index 2a20e3810534..d0dee2282325 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6349,7 +6349,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		goto inval;
 
 	/* Check if the VMA got isolated after we found it */
-	if (vma->detached) {
+	if (is_vma_detached(vma)) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_MISS);
 		/* The area was replaced with another one */
diff --git a/mm/vma.c b/mm/vma.c
index 0caaeea899a9..476146c25283 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -327,7 +327,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 
 	if (vp->remove) {
 again:
-		vma_mark_detached(vp->remove, true);
+		vma_mark_detached(vp->remove);
 		if (vp->file) {
 			uprobe_munmap(vp->remove, vp->remove->vm_start,
 				      vp->remove->vm_end);
@@ -1220,7 +1220,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
 
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		vma_mark_detached(vma, false);
+		vma_mark_attached(vma);
 
 	__mt_destroy(mas_detach->tree);
 }
@@ -1295,7 +1295,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 	if (error)
 		goto munmap_gather_failed;
 
-	vma_mark_detached(next, true);
+	vma_mark_detached(next);
 	nrpages = vma_pages(next);
 
 	vms->nr_pages += nrpages;
diff --git a/mm/vma.h b/mm/vma.h
index 61ed044b6145..24636a2b0acf 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -157,6 +157,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
+	vma_mark_attached(vma);
 	return 0;
 }
 
@@ -389,6 +390,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
+	vma_mark_attached(vma);
 }
 
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index d19ce6fcab83..2a624f9304da 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -465,13 +465,17 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
 	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
 }
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -484,7 +488,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
 	vma_lock_init(vma);
 }
 
@@ -510,6 +515,8 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	memcpy(new, orig, sizeof(*new));
 	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
 
 	return new;
 }
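
For illustration only (not part of the patch): a minimal sketch of how a
lockless reader is expected to combine the VMA read lock with the detached
flag after this change, modeled on the lock_vma_under_rcu() hunk above. The
wrapper name is hypothetical; the helpers are the ones introduced in this
series.

	static bool example_try_pin_vma(struct vm_area_struct *vma)
	{
		if (!vma_start_read(vma))
			return false;		/* contended: fall back to mmap_lock */

		if (is_vma_detached(vma)) {
			/* VMA was removed from the tree after we looked it up */
			vma_end_read(vma);
			return false;
		}

		return true;			/* VMA is attached and read-locked */
	}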

From patchwork Thu Dec 26 17:06:56 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921385
Date: Thu, 26 Dec 2024 09:06:56 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-5-surenb@google.com>
Subject: [PATCH v7 04/17] mm: modify vma_iter_store{_gfp} to indicate if it's storing a new vma
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

vma_iter_store() functions can be used both when adding a new vma and when
updating an existing one. However, for existing ones we do not need to mark
them attached as they are already marked that way. Add a parameter to
distinguish the usage and skip vma_mark_attached() when not needed.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm.h | 12 ++++++++++++
 mm/nommu.c         |  4 ++--
 mm/vma.c           | 16 ++++++++--------
 mm/vma.h           | 13 +++++++++----
 4 files changed, 31 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 081178b0eec4..c50edfedd99d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,6 +821,16 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 		vma_assert_write_locked(vma);
 }
 
+static inline void vma_assert_attached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(vma->detached, vma);
+}
+
+static inline void vma_assert_detached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(!vma->detached, vma);
+}
+
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
 	vma->detached = false;
@@ -866,6 +876,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 		{ mmap_assert_write_locked(vma->vm_mm); }
+static inline void vma_assert_attached(struct vm_area_struct *vma) {}
+static inline void vma_assert_detached(struct vm_area_struct *vma) {}
 static inline void vma_mark_attached(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 
diff --git a/mm/nommu.c b/mm/nommu.c
index 9cb6e99215e2..72c8c505836c 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1191,7 +1191,7 @@ unsigned long do_mmap(struct file *file,
 	setup_vma_to_mm(vma, current->mm);
 	current->mm->map_count++;
 	/* add the VMA to the tree */
-	vma_iter_store(&vmi, vma);
+	vma_iter_store(&vmi, vma, true);
 
 	/* we flush the region from the icache only when the first executable
 	 * mapping of it is made */
@@ -1356,7 +1356,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	setup_vma_to_mm(vma, mm);
 	setup_vma_to_mm(new, mm);
-	vma_iter_store(vmi, new);
+	vma_iter_store(vmi, new, true);
 	mm->map_count++;
 
 	return 0;
diff --git a/mm/vma.c b/mm/vma.c
index 476146c25283..ce113dd8c471 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -306,7 +306,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
 		 * us to insert it before dropping the locks
 		 * (it may either follow vma or precede it).
 		 */
-		vma_iter_store(vmi, vp->insert);
+		vma_iter_store(vmi, vp->insert, true);
 		mm->map_count++;
 	}
 
@@ -660,14 +660,14 @@ static int commit_merge(struct vma_merge_struct *vmg,
 	vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
 
 	if (expanded)
-		vma_iter_store(vmg->vmi, vmg->vma);
+		vma_iter_store(vmg->vmi, vmg->vma, false);
 
 	if (adj_start) {
 		adjust->vm_start += adj_start;
 		adjust->vm_pgoff += PHYS_PFN(adj_start);
 		if (adj_start < 0) {
 			WARN_ON(expanded);
-			vma_iter_store(vmg->vmi, adjust);
+			vma_iter_store(vmg->vmi, adjust, false);
 		}
 	}
 
@@ -1689,7 +1689,7 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
 		return -ENOMEM;
 
 	vma_start_write(vma);
-	vma_iter_store(&vmi, vma);
+	vma_iter_store(&vmi, vma, true);
 	vma_link_file(vma);
 	mm->map_count++;
 	validate_mm(mm);
@@ -2368,7 +2368,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 	/* Lock the VMA since it is modified after insertion into VMA tree */
 	vma_start_write(vma);
-	vma_iter_store(vmi, vma);
+	vma_iter_store(vmi, vma, true);
 
 	map->mm->map_count++;
 	vma_link_file(vma);
@@ -2542,7 +2542,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vm_flags_init(vma, flags);
 	vma->vm_page_prot = vm_get_page_prot(flags);
 	vma_start_write(vma);
-	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
+	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL, true))
 		goto mas_store_fail;
 
 	mm->map_count++;
@@ -2785,7 +2785,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 		anon_vma_interval_tree_pre_update_vma(vma);
 		vma->vm_end = address;
 		/* Overwrite old entry in mtree. */
-		vma_iter_store(&vmi, vma);
+		vma_iter_store(&vmi, vma, false);
 		anon_vma_interval_tree_post_update_vma(vma);
 
 		perf_event_mmap(vma);
@@ -2865,7 +2865,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 		vma->vm_start = address;
 		vma->vm_pgoff -= grow;
 		/* Overwrite old entry in mtree. */
-		vma_iter_store(&vmi, vma);
+		vma_iter_store(&vmi, vma, false);
 		anon_vma_interval_tree_post_update_vma(vma);
 
 		perf_event_mmap(vma);
 
diff --git a/mm/vma.h b/mm/vma.h
index 24636a2b0acf..18c9e49b1eae 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -145,7 +145,7 @@ __must_check int vma_shrink(struct vma_iterator *vmi,
 		unsigned long start, unsigned long end, pgoff_t pgoff);
 
 static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
-			struct vm_area_struct *vma, gfp_t gfp)
+			struct vm_area_struct *vma, gfp_t gfp, bool new_vma)
 {
 	if (vmi->mas.status != ma_start &&
@@ -157,7 +157,10 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
 	if (unlikely(mas_is_err(&vmi->mas)))
 		return -ENOMEM;
 
-	vma_mark_attached(vma);
+	if (new_vma)
+		vma_mark_attached(vma);
+	vma_assert_attached(vma);
+
 	return 0;
 }
 
@@ -366,7 +369,7 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
 
 /* Store a VMA with preallocated memory */
 static inline void vma_iter_store(struct vma_iterator *vmi,
-				  struct vm_area_struct *vma)
+				  struct vm_area_struct *vma, bool new_vma)
 {
 
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
@@ -390,7 +393,9 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
 	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
 	mas_store_prealloc(&vmi->mas, vma);
-	vma_mark_attached(vma);
+	if (new_vma)
+		vma_mark_attached(vma);
+	vma_assert_attached(vma);
 }
 
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
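The calling convention introduced by this patch can be summarized with a
minimal sketch. Only vma_iter_store(), vma_iter_store_gfp() and their new
new_vma flag come from the patch; the surrounding call sites below are
illustrative, not verbatim kernel code:

	/* Inserting a vma that is not yet in the maple tree: mark it attached. */
	vma_start_write(vma);
	vma_iter_store(&vmi, vma, /* new_vma = */ true);

	/*
	 * Updating the range of a vma that is already in the tree: it is
	 * attached already, so only the vma_assert_attached() check runs.
	 */
	vma_iter_store(&vmi, vma, /* new_vma = */ false);

	/* The allocating variant carries the same distinction. */
	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL, /* new_vma = */ true))
		goto store_failed;	/* -ENOMEM: the vma was not stored */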
From patchwork Thu Dec 26 17:06:57 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921386
Date: Thu, 26 Dec 2024 09:06:57 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-6-surenb@google.com>
Subject: [PATCH v7 05/17] mm: mark vmas detached upon exit
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

When exit_mmap() removes vmas belonging to an exiting task, it does not mark
them as detached since they can't be reached by other tasks and they will be
freed shortly. Once we introduce vma reuse, all vmas will have to be in a
detached state before they are freed, to ensure that a vma is in a consistent
state when it is reused. Add the missing vma_mark_detached() before freeing
the vma.
Signed-off-by: Suren Baghdasaryan
---
 mm/vma.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index ce113dd8c471..4a3deb6f9662 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -413,9 +413,10 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable)
+	if (unreachable) {
+		vma_mark_detached(vma);
 		__vm_area_free(vma);
-	else
+	} else
 		vm_area_free(vma);
 }
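The invariant this change works toward can be sketched as follows.
vma_mark_detached(), vma_assert_detached() and __vm_area_free() are the
helpers used by this series; the allocation-side check is illustrative of
the stated motivation (vma reuse arrives later in the series), not actual
kernel code:

	/* Teardown, even on the unreachable exit_mmap() path: */
	vma_mark_detached(vma);
	__vm_area_free(vma);		/* every freed vma is now detached */

	/* A later vma reuse scheme can then rely on a consistent start state: */
	vma = vm_area_alloc(mm);
	vma_assert_detached(vma);	/* reused vmas always begin detached */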
From patchwork Thu Dec 26 17:06:58 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921387
Date: Thu, 26 Dec 2024 09:06:58 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-7-surenb@google.com>
Subject: [PATCH v7 06/17] mm/nommu: fix the last places where vma is not locked before being attached
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
nommu configuration has two places where vma gets attached to the vma tree
without write-locking it. Add the missing locks to ensure vma is always
locked before it's attached.
Signed-off-by: Suren Baghdasaryan
---
 mm/nommu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/nommu.c b/mm/nommu.c
index 72c8c505836c..1754e84e5758 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1189,6 +1189,7 @@ unsigned long do_mmap(struct file *file,
 		goto error_just_free;
 
 	setup_vma_to_mm(vma, current->mm);
+	vma_start_write(vma);
 	current->mm->map_count++;
 	/* add the VMA to the tree */
 	vma_iter_store(&vmi, vma, true);
@@ -1356,6 +1357,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	setup_vma_to_mm(vma, mm);
 	setup_vma_to_mm(new, mm);
+	vma_start_write(new);
 	vma_iter_store(vmi, new, true);
 	mm->map_count++;
 	return 0;
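The rule being enforced here is that a vma must be write-locked before it
becomes reachable through the vma tree. A minimal sketch of the intended
ordering (the helper names come from this series; the surrounding setup is
illustrative only):

	/* vma is newly created and not yet visible to other users */
	setup_vma_to_mm(vma, mm);
	vma_start_write(vma);			/* take the vma write lock first  */
	vma_iter_store(&vmi, vma, true);	/* only then attach it to the tree */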
From patchwork Thu Dec 26 17:06:59 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921388
Date: Thu, 26 Dec 2024 09:06:59 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-8-surenb@google.com>
Subject: [PATCH v7 07/17] types: move struct rcuwait into types.h
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Move rcuwait struct definition into types.h so that rcuwait can be used
without including rcuwait.h, which includes other headers. Without this
change mm_types.h can't use rcuwait due to the following circular dependency:

mm_types.h -> rcuwait.h -> signal.h -> mm_types.h

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Acked-by: Davidlohr Bueso
---
 include/linux/rcuwait.h | 13 +------------
 include/linux/types.h   | 12 ++++++++++++
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 27343424225c..9ad134a04b41 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -4,18 +4,7 @@
 #include
 #include
-
-/*
- * rcuwait provides a way of blocking and waking up a single
- * task in an rcu-safe manner.
- *
- * The only time @task is non-nil is when a user is blocked (or
- * checking if it needs to) on a condition, and reset as soon as we
- * know that the condition has succeeded and are awoken.
- */
-struct rcuwait {
-	struct task_struct __rcu *task;
-};
+#include
 
 #define __RCUWAIT_INITIALIZER(name)	\
 	{ .task = NULL, }
 
diff --git a/include/linux/types.h b/include/linux/types.h
index 2d7b9ae8714c..f1356a9a5730 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -248,5 +248,17 @@ typedef void (*swap_func_t)(void *a, void *b, int size);
 typedef int (*cmp_r_func_t)(const void *a, const void *b, const void *priv);
 typedef int (*cmp_func_t)(const void *a, const void *b);
 
+/*
+ * rcuwait provides a way of blocking and waking up a single
+ * task in an rcu-safe manner.
+ *
+ * The only time @task is non-nil is when a user is blocked (or
+ * checking if it needs to) on a condition, and reset as soon as we
+ * know that the condition has succeeded and are awoken.
+ */
+struct rcuwait {
+	struct task_struct __rcu *task;
+};
+
 #endif /* __ASSEMBLY__ */
 #endif /* _LINUX_TYPES_H */
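The effect of the move is that a header which only needs to embed the
structure can now get it from linux/types.h, while the blocking/waking API
stays in rcuwait.h. A minimal sketch (some_object.h and struct some_object
are hypothetical names used only for illustration):

	/*
	 * some_object.h: embedding needs only the layout of struct rcuwait,
	 * so including <linux/types.h> is enough and no include cycle
	 * through signal.h is created.
	 */
	#include <linux/types.h>

	struct some_object {
		struct rcuwait waiters;
	};

	/* Code that actually blocks or wakes still includes <linux/rcuwait.h>. */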
From patchwork Thu Dec 26 17:07:00 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921389
Date: Thu, 26 Dec 2024 09:07:00 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-9-surenb@google.com>
Subject: [PATCH v7 08/17] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
With the upcoming replacement of vm_lock with vm_refcnt, we need to handle
the possibility of vma_start_read_locked/vma_start_read_locked_nested
failing due to refcount overflow. Prepare for such a possibility by changing
these APIs and adjusting their users.

Signed-off-by: Suren Baghdasaryan
Cc: Lokesh Gidra
---
 include/linux/mm.h | 6 ++++--
 mm/userfaultfd.c   | 17 ++++++++++++-----
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c50edfedd99d..ab27de9729d8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -747,10 +747,11 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read_nested(&vma->vm_lock.lock, subclass);
+	return true;
 }
 
 /*
@@ -759,10 +760,11 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline void vma_start_read_locked(struct vm_area_struct *vma)
+static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read(&vma->vm_lock.lock);
+	return true;
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 4527c385935b..38207d8be205 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -85,7 +85,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
 	if (!IS_ERR(vma))
-		vma_start_read_locked(vma);
+		if (!vma_start_read_locked(vma))
+			vma = ERR_PTR(-EAGAIN);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1483,10 +1484,16 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		vma_start_read_locked(*dst_vmap);
-		if (*dst_vmap != *src_vmap)
-			vma_start_read_locked_nested(*src_vmap,
-						SINGLE_DEPTH_NESTING);
+		if (vma_start_read_locked(*dst_vmap)) {
+			if (*dst_vmap != *src_vmap) {
+				if (!vma_start_read_locked_nested(*src_vmap,
+							SINGLE_DEPTH_NESTING)) {
+					vma_end_read(*dst_vmap);
+					err = -EAGAIN;
+				}
+			}
+		} else
+			err = -EAGAIN;
 	}
 	mmap_read_unlock(mm);
 	return err;
 }
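The caller-side pattern these API changes introduce can be sketched as
follows. vma_start_read_locked(), vma_end_read() and the -EAGAIN convention
come from the patch; the lookup around them is illustrative only:

	mmap_read_lock(mm);
	vma = find_vma(mm, address);
	if (vma && !vma_start_read_locked(vma))
		vma = ERR_PTR(-EAGAIN);		/* could not take the vma read lock */
	mmap_read_unlock(mm);

	if (IS_ERR(vma) && PTR_ERR(vma) == -EAGAIN) {
		/* fall back to a slower path or let the caller retry */
	}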
From patchwork Thu Dec 26 17:07:01 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921390
Date: Thu, 26 Dec 2024 09:07:01 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-10-surenb@google.com>
Subject: [PATCH v7 09/17] mm: move mmap_init_lock() out of the header file
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
mmap_init_lock() is used only from mm_init() in fork.c, therefore it does
not have to reside in the header file. This move lets us avoid including
additional headers in mmap_lock.h later, when mmap_init_lock() needs to
initialize an rcuwait object.
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mmap_lock.h | 6 ------
 kernel/fork.c             | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 45a21faa3ff6..4706c6769902 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -122,12 +122,6 @@ static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int
 
 #endif /* CONFIG_PER_VMA_LOCK */
 
-static inline void mmap_init_lock(struct mm_struct *mm)
-{
-	init_rwsem(&mm->mmap_lock);
-	mm_lock_seqcount_init(mm);
-}
-
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index f2f9e7b427ad..d4c75428ccaf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1219,6 +1219,12 @@ static void mm_init_uprobes_state(struct mm_struct *mm)
 #endif
 }
 
+static inline void mmap_init_lock(struct mm_struct *mm)
+{
+	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
+}
+
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	struct user_namespace *user_ns)
 {
From patchwork Thu Dec 26 17:07:02 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921391
Date: Thu, 26 Dec 2024 09:07:02 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-11-surenb@google.com>
Subject: [PATCH v7 10/17] mm: uninline the main body of vma_start_write()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance critical paths and uninlining it should
limit the future code size growth.

No functional changes.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm.h | 12 +++---------
 mm/memory.c        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ab27de9729d8..ea4c4228b125 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index d0dee2282325..236fdecd44d6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6328,6 +6328,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the

From patchwork Thu Dec 26 17:07:03 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921392
Date: Thu, 26 Dec 2024 09:07:03 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-12-surenb@google.com>
Subject: [PATCH v7 11/17] refcount: introduce __refcount_{add|inc}_not_zero_limited
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce functions that increase a refcount but only up to a top limit,
above which they fail to increase it. Setting the limit to 0 indicates
no limit.
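
A minimal usage sketch (not part of the patch): a caller that wants to take
a reference only while the count stays below some cap. struct my_obj,
my_obj_tryget() and MY_REF_CAP are hypothetical; only the
__refcount_inc_not_zero_limited() helper comes from this patch.

#include <linux/refcount.h>

/* Hypothetical cap, analogous to the VMA_REF_LIMIT used later in this series */
#define MY_REF_CAP	128

struct my_obj {
	refcount_t ref;
};

static bool my_obj_tryget(struct my_obj *obj)
{
	int old;

	/* Fails when the count is already 0 or would go above MY_REF_CAP */
	return __refcount_inc_not_zero_limited(&obj->ref, &old, MY_REF_CAP);
}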
Signed-off-by: Suren Baghdasaryan
---
 include/linux/refcount.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..e51a49179307 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -137,13 +137,19 @@ static inline unsigned int refcount_read(const refcount_t *r)
 }
 
 static inline __must_check __signed_wrap
-bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
+				     int limit)
 {
 	int old = refcount_read(r);
 
 	do {
 		if (!old)
 			break;
+
+		if (limit && old + i > limit) {
+			if (oldp)
+				*oldp = old;
+			return false;
+		}
 	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
 
 	if (oldp)
@@ -155,6 +161,12 @@ bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
 	return old;
 }
 
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+{
+	return __refcount_add_not_zero_limited(i, r, oldp, 0);
+}
+
 /**
  * refcount_add_not_zero - add a value to a refcount unless it is 0
  * @i: the value to add to the refcount
@@ -213,6 +225,12 @@ static inline void refcount_add(int i, refcount_t *r)
 	__refcount_add(i, r, NULL);
 }
 
+static inline __must_check bool __refcount_inc_not_zero_limited(refcount_t *r,
+								int *oldp, int limit)
+{
+	return __refcount_add_not_zero_limited(1, r, oldp, limit);
+}
+
 static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
 {
 	return __refcount_add_not_zero(1, r, oldp);

From patchwork Thu Dec 26 17:07:04 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921393
Date: Thu, 26 Dec 2024 09:07:04 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-13-surenb@google.com>
Subject: [PATCH v7 12/17] mm: replace vm_lock and detached flag with a reference count
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

rw_semaphore is a sizable structure of 40 bytes and consumes considerable
space for each vm_area_struct. However, vma_lock has two important
specifics which can be used to replace rw_semaphore with a simpler
structure:

1. Readers never wait. They try to take the vma_lock and fall back to
   mmap_lock if that fails.
2. Only one writer at a time will ever try to write-lock a vma_lock,
   because writers first take mmap_lock in write mode.

Because of these requirements, full rw_semaphore functionality is not
needed and we can replace rw_semaphore and the vma->detached flag with a
refcount (vm_refcnt).

When vma is in detached state, vm_refcnt is 0 and only a call to
vma_mark_attached() can take it out of this state. Note that unlike
before, we now enforce that both vma_mark_attached() and
vma_mark_detached() are done only after the vma has been write-locked.
vma_mark_attached() changes vm_refcnt to 1 to indicate that it has been
attached to the vma tree.

When a reader takes the read lock, it increments vm_refcnt, unless the
top usable bit of vm_refcnt (0x40000000) is set, indicating the presence
of a writer. When a writer takes the write lock, it both increments
vm_refcnt and sets the top usable bit to indicate its presence. If there
are readers, the writer will wait using the newly introduced
mm->vma_writer_wait. Since all writers take mmap_lock in write mode
first, there can be only one writer at a time. The last reader to release
the lock will signal the writer to wake up. The refcount might overflow
if there are many competing readers, in which case read-locking will
fail. Readers are expected to handle such failures.

Suggested-by: Peter Zijlstra
Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm.h               | 100 +++++++++++++++++++++----------
 include/linux/mm_types.h         |  22 ++++---
 kernel/fork.c                    |  13 ++--
 mm/init-mm.c                     |   1 +
 mm/memory.c                      |  68 +++++++++++++++++----
 tools/testing/vma/linux/atomic.h |   5 ++
 tools/testing/vma/vma_internal.h |  66 +++++++++++---------
 7 files changed, 185 insertions(+), 90 deletions(-)
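
As a caller-side sketch of the two properties above (illustrative only;
handle_fault_sketch() is hypothetical, while lock_vma_under_rcu(),
vma_end_read(), vma_lookup() and mmap_read_lock()/mmap_read_unlock() are
the existing APIs this series builds on):

/*
 * Sketch only: a reader never waits on the per-VMA lock; if the trylock
 * fails it falls back to mmap_lock. Writers always hold mmap_lock in
 * write mode, so at most one writer can exist at a time.
 */
static vm_fault_t handle_fault_sketch(struct mm_struct *mm, unsigned long address)
{
	struct vm_area_struct *vma;

	vma = lock_vma_under_rcu(mm, address);
	if (vma) {
		/* ... handle the fault under the per-VMA read lock ... */
		vma_end_read(vma);
		return 0;
	}

	/* Fall back to the traditional path under mmap_lock. */
	mmap_read_lock(mm);
	vma = vma_lookup(mm, address);
	/* ... handle the fault under mmap_lock ... */
	mmap_read_unlock(mm);
	return 0;
}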
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea4c4228b125..99f4720d7e51 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 
 struct mempolicy;
 struct anon_vma;
@@ -697,12 +698,34 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
-static inline void vma_lock_init(struct vm_area_struct *vma)
+static inline void vma_lockdep_init(struct vm_area_struct *vma)
 {
-	init_rwsem(&vma->vm_lock.lock);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	static struct lock_class_key lockdep_key;
+
+	lockdep_init_map(&vma->vmlock_dep_map, "vm_lock", &lockdep_key, 0);
+#endif
+}
+
+static inline void vma_init_lock(struct vm_area_struct *vma, bool reset_refcnt)
+{
+	if (reset_refcnt)
+		refcount_set(&vma->vm_refcnt, 0);
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+static inline void vma_refcount_put(struct vm_area_struct *vma)
+{
+	int refcnt;
+
+	if (!__refcount_dec_and_test(&vma->vm_refcnt, &refcnt)) {
+		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+
+		if (refcnt & VMA_LOCK_OFFSET)
+			rcuwait_wake_up(&vma->vm_mm->vma_writer_wait);
+	}
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -710,6 +733,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
  */
 static inline bool vma_start_read(struct vm_area_struct *vma)
 {
+	int oldcnt;
+
 	/*
 	 * Check before locking. A race might cause false locked result.
 	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
@@ -720,13 +745,20 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
+
+	rwsem_acquire_read(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
+	/* Limit at VMA_REF_LIMIT to leave one count for a writer */
+	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
+						      VMA_REF_LIMIT))) {
+		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
 		return false;
+	}
+	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
 
 	/*
-	 * Overflow might produce false locked result.
+	 * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result.
 	 * False unlocked result is impossible because we modify and check
-	 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
+	 * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq
 	 * modification invalidates all existing locks.
 	 *
 	 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are
@@ -734,10 +766,12 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock.lock);
+	if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
+		     vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+		vma_refcount_put(vma);
 		return false;
 	}
+
 	return true;
 }
 
@@ -749,8 +783,17 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  */
 static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
+	int oldcnt;
+
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock.lock, subclass);
+	rwsem_acquire_read(&vma->vmlock_dep_map, subclass, 0, _RET_IP_);
+	/* Limit at VMA_REF_LIMIT to leave one count for a writer */
+	if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt,
+						      VMA_REF_LIMIT))) {
+		rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+		return false;
+	}
+	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
 	return true;
 }
 
@@ -762,15 +805,13 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int
  */
 static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
-	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock.lock);
-	return true;
+	return vma_start_read_locked_nested(vma, 0);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock.lock);
+	vma_refcount_put(vma);
 	rcu_read_unlock();
 }
 
@@ -813,36 +854,33 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock.lock))
+	if (refcount_read(&vma->vm_refcnt) <= 1)
 		vma_assert_write_locked(vma);
 }
 
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
 static inline void vma_assert_attached(struct vm_area_struct *vma)
 {
-	VM_BUG_ON_VMA(vma->detached, vma);
+	VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma);
 }
 
 static inline void vma_assert_detached(struct vm_area_struct *vma)
 {
-	VM_BUG_ON_VMA(!vma->detached, vma);
+	VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma);
 }
 
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
-	vma->detached = false;
-}
-
-static inline void vma_mark_detached(struct vm_area_struct *vma)
-{
-	/* When detaching vma should be write-locked */
 	vma_assert_write_locked(vma);
-	vma->detached = true;
+	vma_assert_detached(vma);
+	refcount_set(&vma->vm_refcnt, 1);
 }
 
-static inline bool is_vma_detached(struct vm_area_struct *vma)
-{
-	return vma->detached;
-}
+void vma_mark_detached(struct vm_area_struct *vma);
 
 static inline void release_fault_lock(struct vm_fault *vmf)
 {
@@ -865,7 +903,8 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
-static inline void vma_lock_init(struct vm_area_struct *vma) {}
+static inline void vma_lockdep_init(struct vm_area_struct *vma) {}
+static inline void vma_init_lock(struct vm_area_struct *vma, bool reset_refcnt) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -908,12 +947,9 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
-	/* vma is not locked, can't use vma_mark_detached() */
-	vma->detached = true;
-#endif
 	vma_numab_state_init(vma);
-	vma_lock_init(vma);
+	vma_lockdep_init(vma);
+	vma_init_lock(vma, false);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6573d95f1d1e..b5312421dec6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -629,9 +630,8 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
 }
 #endif
 
-struct vma_lock {
-	struct rw_semaphore lock;
-};
+#define VMA_LOCK_OFFSET	0x40000000
+#define VMA_REF_LIMIT	(VMA_LOCK_OFFSET - 2)
 
 struct vma_numab_state {
 	/*
@@ -709,19 +709,13 @@ struct vm_area_struct {
 	};
 
 #ifdef CONFIG_PER_VMA_LOCK
-	/*
-	 * Flag to indicate areas detached from the mm->mm_mt tree.
-	 * Unstable RCU readers are allowed to read this.
-	 */
-	bool detached;
-
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 * - mmap_lock (in write mode)
-	 * - vm_lock->lock (in write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set
 	 * Can be read reliably while holding one of:
 	 * - mmap_lock (in read or write mode)
-	 * - vm_lock->lock (in read or write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -784,7 +778,10 @@ struct vm_area_struct {
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_PER_VMA_LOCK
 	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+	refcount_t vm_refcnt ____cacheline_aligned_in_smp;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map vmlock_dep_map;
+#endif
 #endif
 } __randomize_layout;
 
@@ -919,6 +916,7 @@ struct mm_struct {
 					  * by mmlist_lock
 					  */
 #ifdef CONFIG_PER_VMA_LOCK
+	struct rcuwait vma_writer_wait;
 	/*
 	 * This field has lock-like semantics, meaning it is sometimes
 	 * accessed with ACQUIRE/RELEASE semantics.
diff --git a/kernel/fork.c b/kernel/fork.c
index d4c75428ccaf..7a0800d48112 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -463,12 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	vma_lock_init(new);
+	vma_init_lock(new, true);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
-#ifdef CONFIG_PER_VMA_LOCK
-	/* vma is not locked, can't use vma_mark_detached() */
-	new->detached = true;
-#endif
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
 
@@ -477,6 +473,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 
 void __vm_area_free(struct vm_area_struct *vma)
 {
+	/* The vma should be detached while being destroyed. */
+	vma_assert_detached(vma);
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
 	kmem_cache_free(vm_area_cachep, vma);
@@ -488,8 +486,6 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
 						  vm_rcu);
 
-	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -1223,6 +1219,9 @@ static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
 	mm_lock_seqcount_init(mm);
+#ifdef CONFIG_PER_VMA_LOCK
+	rcuwait_init(&mm->vma_writer_wait);
+#endif
 }
 
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 6af3ad675930..4600e7605cab 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,6 +40,7 @@ struct mm_struct init_mm = {
 	.arg_lock	=  __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
+	.vma_writer_wait = __RCUWAIT_INITIALIZER(init_mm.vma_writer_wait),
 	.mm_lock_seq	= SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
 	.user_ns	= &init_user_ns,
diff --git a/mm/memory.c b/mm/memory.c
index 236fdecd44d6..2def47b5dff0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6328,9 +6328,39 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline bool __vma_enter_locked(struct vm_area_struct *vma, unsigned int tgt_refcnt)
+{
+	/*
+	 * If vma is detached then only vma_mark_attached() can raise the
+	 * vm_refcnt. mmap_write_lock prevents racing with vma_mark_attached().
+	 */
+	if (!refcount_inc_not_zero(&vma->vm_refcnt))
+		return false;
+
+	rwsem_acquire(&vma->vmlock_dep_map, 0, 0, _RET_IP_);
+	/* vma is attached, set the writer present bit */
+	refcount_add(VMA_LOCK_OFFSET, &vma->vm_refcnt);
+	rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
+			   refcount_read(&vma->vm_refcnt) == tgt_refcnt,
+			   TASK_UNINTERRUPTIBLE);
+	lock_acquired(&vma->vmlock_dep_map, _RET_IP_);
+
+	return true;
+}
+
+static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
+{
+	*detached = refcount_sub_and_test(VMA_LOCK_OFFSET + 1, &vma->vm_refcnt);
+	rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
+}
+
 void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
 {
-	down_write(&vma->vm_lock.lock);
+	bool locked;
+
+	/* Wait until refcnt is (VMA_LOCK_OFFSET + 2) => attached with no readers */
+	locked = __vma_enter_locked(vma, VMA_LOCK_OFFSET + 2);
+
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -6338,10 +6368,36 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+
+	if (locked) {
+		bool detached;
+
+		__vma_exit_locked(vma, &detached);
+		VM_BUG_ON_VMA(detached, vma); /* vma should remain attached */
+	}
 }
 EXPORT_SYMBOL_GPL(__vma_start_write);
 
+void vma_mark_detached(struct vm_area_struct *vma)
+{
+	vma_assert_write_locked(vma);
+	vma_assert_attached(vma);
+
+	/* We are the only writer, so no need to use vma_refcount_put(). */
+	if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+		/*
+		 * Wait until refcnt is (VMA_LOCK_OFFSET + 1) => detached with
+		 * no readers
+		 */
+		if (__vma_enter_locked(vma, VMA_LOCK_OFFSET + 1)) {
+			bool detached;
+
+			__vma_exit_locked(vma, &detached);
+			VM_BUG_ON_VMA(!detached, vma);
+		}
+	}
+}
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the
@@ -6354,7 +6410,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	struct vm_area_struct *vma;
 
 	rcu_read_lock();
-retry:
 	vma = mas_walk(&mas);
 	if (!vma)
 		goto inval;
 
@@ -6362,13 +6417,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/* Check if the VMA got isolated after we found it */
-	if (is_vma_detached(vma)) {
-		vma_end_read(vma);
-		count_vm_vma_lock_event(VMA_LOCK_MISS);
-		/* The area was replaced with another one */
-		goto retry;
-	}
 	/*
 	 * At this point, we have a stable reference to a VMA: The VMA is
 	 * locked and we know it hasn't already been isolated.
diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
index e01f66f98982..2e2021553196 100644
--- a/tools/testing/vma/linux/atomic.h
+++ b/tools/testing/vma/linux/atomic.h
@@ -9,4 +9,9 @@
 #define atomic_set(x, y) do {} while (0)
 #define U8_MAX UCHAR_MAX
 
+#ifndef atomic_cmpxchg_relaxed
+#define atomic_cmpxchg_relaxed uatomic_cmpxchg
+#define atomic_cmpxchg_release uatomic_cmpxchg
+#endif /* atomic_cmpxchg_relaxed */
+
 #endif	/* _LINUX_ATOMIC_H */
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2a624f9304da..1e8cd2f013fa 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -25,7 +25,7 @@
 #include
 #include
 #include
-#include
+#include
 
 extern unsigned long stack_guard_gap;
 #ifdef CONFIG_MMU
@@ -132,10 +132,6 @@ typedef __bitwise unsigned int vm_fault_t;
  */
 #define pr_warn_once pr_err
 
-typedef struct refcount_struct {
-	atomic_t refs;
-} refcount_t;
-
 struct kref {
 	refcount_t refcount;
 };
@@ -228,15 +224,12 @@ struct mm_struct {
 	unsigned long def_flags;
 };
 
-struct vma_lock {
-	struct rw_semaphore lock;
-};
-
-
 struct file {
 	struct address_space	*f_mapping;
 };
 
+#define VMA_LOCK_OFFSET	0x40000000
+
 struct vm_area_struct {
 	/* The first cache line has the info for VMA tree walking. */
 
@@ -264,16 +257,13 @@ struct vm_area_struct {
 	};
 
 #ifdef CONFIG_PER_VMA_LOCK
-	/* Flag to indicate areas detached from the mm->mm_mt tree */
-	bool detached;
-
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 * - mmap_lock (in write mode)
-	 * - vm_lock.lock (in write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set
 	 * Can be read reliably while holding one of:
 	 * - mmap_lock (in read or write mode)
-	 * - vm_lock.lock (in read or write mode)
+	 * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -282,7 +272,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -335,6 +324,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	refcount_t vm_refcnt;
+#endif
 } __randomize_layout;
 
 struct vm_fault {};
@@ -459,23 +452,41 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline void vma_lock_init(struct vm_area_struct *vma)
+/*
+ * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these
+ * assertions should be made either under mmap_write_lock or when the object
+ * has been isolated under mmap_write_lock, ensuring no competing writers.
+ */
+static inline void vma_assert_attached(struct vm_area_struct *vma)
 {
-	init_rwsem(&vma->vm_lock.lock);
-	vma->vm_lock_seq = UINT_MAX;
+	VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma);
 }
 
-static inline void vma_mark_attached(struct vm_area_struct *vma)
+static inline void vma_assert_detached(struct vm_area_struct *vma)
 {
-	vma->detached = false;
+	VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
 
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma_assert_write_locked(vma);
+	vma_assert_detached(vma);
+	refcount_set(&vma->vm_refcnt, 1);
+}
+
 static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
-	/* When detaching vma should be write-locked */
 	vma_assert_write_locked(vma);
-	vma->detached = true;
+	vma_assert_attached(vma);
+
+	/* We are the only writer, so no need to use vma_refcount_put(). */
+	if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) {
+		/*
+		 * Reader must have temporarily raised vm_refcnt but it will
+		 * drop it without using the vma since vma is write-locked.
+		 */
+	}
 }
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -488,9 +499,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	/* vma is not locked, can't use vma_mark_detached() */
-	vma->detached = true;
-	vma_lock_init(vma);
+	vma->vm_lock_seq = UINT_MAX;
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -513,10 +522,9 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	vma_lock_init(new);
+	refcount_set(&new->vm_refcnt, 0);
+	new->vm_lock_seq = UINT_MAX;
 	INIT_LIST_HEAD(&new->anon_vma_chain);
-	/* vma is not locked, can't use vma_mark_detached() */
-	new->detached = true;
 
 	return new;
 }

From patchwork Thu Dec 26 17:07:05 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921394
Date: Thu, 26 Dec 2024 09:07:05 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-14-surenb@google.com>
Subject: [PATCH v7 13/17] mm/debug: print vm_refcnt state when dumping the vma
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

vm_refcnt encodes a number of useful states:
- whether the vma is attached or detached
- the number of current vma readers
- presence of a vma writer

Let's include it in the vma dump.
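
For reading the value that dump_vma() prints, a tiny illustrative decoder
(not part of the patch; it mirrors the encoding used by this series, and
transient attach/detach states can deviate from this simple reading):

#include <stdio.h>
#include <stdlib.h>

#define VMA_LOCK_OFFSET 0x40000000u	/* writer-present bit from this series */

int main(int argc, char **argv)
{
	unsigned int cnt = argc > 1 ? (unsigned int)strtoul(argv[1], NULL, 16) : 0;

	if (cnt == 0)
		printf("detached\n");
	else if (cnt & VMA_LOCK_OFFSET)
		printf("attached, write-locked (raw 0x%x)\n", cnt);
	else if (cnt == 1)
		printf("attached, not read-locked\n");
	else
		printf("attached, read-locked by %u reader(s)\n", cnt - 1);
	return 0;
}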
Signed-off-by: Suren Baghdasaryan
---
 mm/debug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/debug.c b/mm/debug.c
index 95b6ab809c0e..68b3ba3cf603 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -181,12 +181,12 @@ void dump_vma(const struct vm_area_struct *vma)
 	pr_emerg("vma %px start %px end %px mm %px\n"
 		"prot %lx anon_vma %px vm_ops %px\n"
 		"pgoff %lx file %px private_data %px\n"
-		"flags: %#lx(%pGv)\n",
+		"flags: %#lx(%pGv) refcnt %x\n",
 		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
 		(unsigned long)pgprot_val(vma->vm_page_prot),
 		vma->anon_vma, vma->vm_ops, vma->vm_pgoff, vma->vm_file,
 		vma->vm_private_data,
-		vma->vm_flags, &vma->vm_flags);
+		vma->vm_flags, &vma->vm_flags, refcount_read(&vma->vm_refcnt));
 }
 EXPORT_SYMBOL(dump_vma);

From patchwork Thu Dec 26 17:07:06 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13921395
Date: Thu, 26 Dec 2024 09:07:06 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-15-surenb@google.com>
Subject: [PATCH v7 14/17] mm: remove extra vma_numab_state_init() call
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
    linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

vma_init() already memsets the whole vm_area_struct to 0, so there is no need
for an additional vma_numab_state_init() call.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 99f4720d7e51..40bbe815df11 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -947,7 +947,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_numab_state_init(vma);
 	vma_lockdep_init(vma);
 	vma_init_lock(vma, false);
 }
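
For context, a minimal sketch of what the dropped helper does, assuming the
mainline definition of vma_numab_state_init(); the snippet below is an
illustration, not part of this patch. The helper only clears the
NUMA-balancing state pointer, which the memset() performed by vma_init() has
already zeroed:

/*
 * Sketch, assuming the mainline definition: the helper only resets
 * numab_state, which memset(vma, 0, sizeof(*vma)) in vma_init() has
 * already cleared, so calling it again from vma_init() is redundant.
 */
static inline void vma_numab_state_init(struct vm_area_struct *vma)
{
#ifdef CONFIG_NUMA_BALANCING
	vma->numab_state = NULL;
#endif
}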

From patchwork Thu Dec 26 17:07:07 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13921396
Date: Thu, 26 Dec 2024 09:07:07 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-16-surenb@google.com>
Subject: [PATCH v7 15/17] mm: prepare lock_vma_under_rcu() for vma reuse possibility
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

Once we make the vma cache SLAB_TYPESAFE_BY_RCU, it will be possible for a vma
to be reused and attached to another mm after lock_vma_under_rcu() locks the
vma. lock_vma_under_rcu() should ensure that vma_start_read() uses the
original mm and, after locking the vma, should verify that vma->vm_mm has not
changed from under us.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h | 10 ++++++----
 mm/memory.c        |  7 ++++---
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 40bbe815df11..56a7d70ca5bd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -730,8 +730,10 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
  * using mmap_lock. The function should never yield false unlocked result.
+ * False locked result is possible if mm_lock_seq overflows or if vma gets
+ * reused and attached to a different mm before we lock it.
  */
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	int oldcnt;
 
@@ -742,7 +744,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;
 
@@ -767,7 +769,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(oldcnt & VMA_LOCK_OFFSET ||
-		     vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+		     vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
 		vma_refcount_put(vma);
 		return false;
 	}
 
@@ -905,7 +907,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 static inline void vma_lockdep_init(struct vm_area_struct *vma) {}
 static inline void vma_init_lock(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 	{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
diff --git a/mm/memory.c b/mm/memory.c
index 2def47b5dff0..9cc93c2f79f3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6414,7 +6414,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma)
 		goto inval;
 
-	if (!vma_start_read(vma))
+	if (!vma_start_read(mm, vma))
 		goto inval;
 
 	/*
@@ -6424,8 +6424,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * fields are accessible for RCU readers.
 	 */
 
-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+	/* Check if the vma we locked is the right one. */
+	if (unlikely(vma->vm_mm != mm ||
+		     address < vma->vm_start || address >= vma->vm_end))
 		goto inval_end_read;
 
 	rcu_read_unlock();
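
The lookup protocol this patch establishes can be summarised in a short
sketch. This is an illustration of the pattern rather than the kernel code
itself: the maple-tree walk is hidden behind a hypothetical
find_vma_in_tree() helper and every failure path simply returns NULL so the
caller can fall back to taking mmap_lock.

/*
 * Sketch of the lookup-then-revalidate pattern: look the VMA up under
 * RCU, read-lock it against the mm we were given, then re-check that the
 * (possibly reused) object still belongs to this mm and still covers the
 * address before trusting it.
 */
static struct vm_area_struct *lookup_sketch(struct mm_struct *mm,
					    unsigned long address)
{
	struct vm_area_struct *vma;

	rcu_read_lock();
	vma = find_vma_in_tree(mm, address);	/* hypothetical lookup helper */
	if (!vma || !vma_start_read(mm, vma))
		goto none;

	/* The slab object may have been freed and reused for another mm. */
	if (unlikely(vma->vm_mm != mm ||
		     address < vma->vm_start || address >= vma->vm_end)) {
		vma_end_read(vma);
		goto none;
	}
	rcu_read_unlock();
	return vma;
none:
	rcu_read_unlock();
	return NULL;
}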

From patchwork Thu Dec 26 17:07:08 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13921397
Date: Thu, 26 Dec 2024 09:07:08 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-17-surenb@google.com>
Subject: [PATCH v7 16/17] mm: make vma cache SLAB_TYPESAFE_BY_RCU
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure that object
reuse before the RCU grace period is over will be detected by
lock_vma_under_rcu(). The current checks are sufficient as long as the vma is
detached before it is freed. Implement this guarantee by calling
vma_ensure_detached() before the vma is freed and make vm_area_cachep
SLAB_TYPESAFE_BY_RCU. This will facilitate vm_area_struct reuse and will
minimize the number of call_rcu() calls.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h               |  2 --
 include/linux/mm_types.h         | 10 +++++++---
 include/linux/slab.h             |  6 ------
 kernel/fork.c                    | 31 +++++++++----------------------
 mm/mmap.c                        |  3 ++-
 mm/vma.c                         | 10 +++-------
 mm/vma.h                         |  2 +-
 tools/testing/vma/vma_internal.h |  7 +------
 8 files changed, 23 insertions(+), 48 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 56a7d70ca5bd..017d70e1d432 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
 struct vm_area_struct *vm_area_alloc(struct mm_struct *);
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
 void vm_area_free(struct vm_area_struct *);
-/* Use only if VMA has no other users */
-void __vm_area_free(struct vm_area_struct *vma);
 
 #ifndef CONFIG_MMU
 extern struct rb_root nommu_region_tree;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index b5312421dec6..3ca4695f6d0f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -574,6 +574,12 @@ static inline void *folio_get_private(struct folio *folio)
 
 typedef unsigned long vm_flags_t;
 
+/*
+ * freeptr_t represents a SLUB freelist pointer, which might be encoded
+ * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
+ */
+typedef struct { unsigned long v; } freeptr_t;
+
 /*
  * A region containing a mapping of a non-memory backed file under NOMMU
  * conditions. These are held in a global tree and are pinned by the VMAs that
@@ -687,9 +693,7 @@ struct vm_area_struct {
 			unsigned long vm_start;
 			unsigned long vm_end;
 		};
-#ifdef CONFIG_PER_VMA_LOCK
-		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
-#endif
+		freeptr_t vm_freeptr;	/* Pointer used by SLAB_TYPESAFE_BY_RCU */
 	};
 
 	/*
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..681b685b6c4e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -234,12 +234,6 @@ enum _slab_flag_bits {
 #define SLAB_NO_OBJ_EXT		__SLAB_FLAG_UNUSED
 #endif
 
-/*
- * freeptr_t represents a SLUB freelist pointer, which might be encoded
- * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
- */
-typedef struct { unsigned long v; } freeptr_t;
-
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
 *
diff --git a/kernel/fork.c b/kernel/fork.c
index 7a0800d48112..da3b1ebfd282 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -471,7 +471,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	return new;
 }
 
-void __vm_area_free(struct vm_area_struct *vma)
+void vm_area_free(struct vm_area_struct *vma)
 {
 	/* The vma should be detached while being destroyed. */
 	vma_assert_detached(vma);
@@ -480,25 +480,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
-#ifdef CONFIG_PER_VMA_LOCK
-static void vm_area_free_rcu_cb(struct rcu_head *head)
-{
-	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
-						  vm_rcu);
-
-	__vm_area_free(vma);
-}
-#endif
-
-void vm_area_free(struct vm_area_struct *vma)
-{
-#ifdef CONFIG_PER_VMA_LOCK
-	call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
-#else
-	__vm_area_free(vma);
-#endif
-}
-
 static void account_kernel_stack(struct task_struct *tsk, int account)
 {
 	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
@@ -3144,6 +3125,11 @@ void __init mm_cache_init(void)
 
 void __init proc_caches_init(void)
 {
+	struct kmem_cache_args args = {
+		.use_freeptr_offset = true,
+		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
+	};
+
 	sighand_cachep = kmem_cache_create("sighand_cache",
 			sizeof(struct sighand_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
@@ -3160,8 +3146,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-	vm_area_cachep = KMEM_CACHE(vm_area_struct,
-			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+	vm_area_cachep = kmem_cache_create("vm_area_struct",
+			sizeof(struct vm_area_struct), &args,
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
 			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
diff --git a/mm/mmap.c b/mm/mmap.c
index 3cc8de07411d..7fdc4207fe98 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1364,7 +1364,8 @@ void exit_mmap(struct mm_struct *mm)
 	do {
 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += vma_pages(vma);
-		remove_vma(vma, /* unreachable = */ true);
+		vma_mark_detached(vma);
+		remove_vma(vma);
 		count++;
 		cond_resched();
 		vma = vma_next(&vmi);
diff --git a/mm/vma.c b/mm/vma.c
index 4a3deb6f9662..e37eb384d118 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -406,18 +406,14 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg,
 /*
  * Close a vm structure and free it.
  */
-void remove_vma(struct vm_area_struct *vma, bool unreachable)
+void remove_vma(struct vm_area_struct *vma)
 {
 	might_sleep();
 	vma_close(vma);
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable) {
-		vma_mark_detached(vma);
-		__vm_area_free(vma);
-	} else
-		vm_area_free(vma);
+	vm_area_free(vma);
 }
 
 /*
@@ -1199,7 +1195,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		remove_vma(vma, /* unreachable = */ false);
+		remove_vma(vma);
 
 	vm_unacct_memory(vms->nr_accounted);
 	validate_mm(mm);
diff --git a/mm/vma.h b/mm/vma.h
index 18c9e49b1eae..d6803626151d 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -173,7 +173,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  unsigned long start, size_t len, struct list_head *uf,
 		  bool unlock);
 
-void remove_vma(struct vm_area_struct *vma, bool unreachable);
+void remove_vma(struct vm_area_struct *vma);
 
 void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
 		struct vm_area_struct *prev, struct vm_area_struct *next);
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 1e8cd2f013fa..c7c580ec9a2d 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -693,14 +693,9 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void __vm_area_free(struct vm_area_struct *vma)
-{
-	free(vma);
-}
-
 static inline void vm_area_free(struct vm_area_struct *vma)
 {
-	__vm_area_free(vma);
+	free(vma);
 }
 
 static inline void lru_add_drain(void)
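
The cache setup is the piece that ties the series together, so it is worth
restating in isolation. The sketch below mirrors the kernel/fork.c hunk above,
assuming the kmem_cache_args form of kmem_cache_create() used there; it is an
illustration rather than additional kernel code.

/*
 * Sketch restating the cache creation from the hunk above.
 * SLAB_TYPESAFE_BY_RCU keeps freed objects type-stable across an RCU
 * grace period, so a reader holding a stale pointer still sees a
 * vm_area_struct and must re-validate identity, as lock_vma_under_rcu()
 * now does. The explicit freeptr_offset places the SLUB freelist pointer
 * in the dedicated vm_freeptr field added to vm_area_struct.
 */
static struct kmem_cache *vm_area_cache_create_sketch(void)
{
	struct kmem_cache_args args = {
		.use_freeptr_offset = true,
		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
	};

	return kmem_cache_create("vm_area_struct",
				 sizeof(struct vm_area_struct), &args,
				 SLAB_HWCACHE_ALIGN | SLAB_PANIC |
				 SLAB_TYPESAFE_BY_RCU | SLAB_ACCOUNT);
}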

From patchwork Thu Dec 26 17:07:09 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13921398
Date: Thu, 26 Dec 2024 09:07:09 -0800
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
Message-ID: <20241226170710.1159679-18-surenb@google.com>
Subject: [PATCH v7 17/17] docs/mm: document latest changes to vm_lock
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com,
    hughd@google.com, lokeshgidra@google.com, minchan@google.com,
    jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com,
    pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

Change the documentation to reflect that vm_lock is integrated into the vma
and replaced with vm_refcnt. Document the newly introduced
vma_start_read_locked{_nested} functions.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 Documentation/mm/process_addrs.rst | 44 ++++++++++++++++++------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 81417fa2ed20..f573de936b5d 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
 critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
 before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
 
-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
-their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
-via :c:func:`!vma_end_read`.
+In cases when the user already holds mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not
+fail due to lock contention but the caller should still check their return values
+in case they fail for other reasons.
+
+VMA read locks increment :c:member:`!vma.vm_refcnt` reference counter for their
+duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via
+:c:func:`!vma_end_read`.
 
 VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
 VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always
@@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write
 lock, releasing or downgrading the mmap write lock also releases the VMA write
 lock so there is no :c:func:`!vma_end_write` function.
 
-Note that a semaphore write lock is not held across a VMA lock. Rather, a
-sequence number is used for serialisation, and the write semaphore is only
-acquired at the point of write lock to update this.
+Note that when write-locking a VMA lock, the :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference counter is
+restored once the vma sequence number used for serialisation is updated.
 
 This ensures the semantics we require - VMA write locks provide exclusive write
 access to the VMA.
@@ -738,7 +743,7 @@ Implementation details
 
 The VMA lock mechanism is designed to be a lightweight means of avoiding the use
 of the heavily contended mmap lock. It is implemented using a combination of a
-read/write semaphore and sequence numbers belonging to the containing
+reference counter and sequence numbers belonging to the containing
 :c:struct:`!struct mm_struct` and the VMA.
 
 Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
@@ -779,28 +784,31 @@ release of any VMA locks on its release makes sense, as you would never want to
 keep VMAs locked across entirely separate write operations. It also maintains
 correct lock ordering.
 
-Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
-the sequence count of the VMA does not match that of the mm.
+Each time a VMA read lock is acquired, we increment :c:member:`!vma.vm_refcnt`
+reference counter and check that the sequence count of the VMA does not match
+that of the mm.
 
-If it does, the read lock fails. If it does not, we hold the lock, excluding
-writers, but permitting other readers, who will also obtain this lock under RCU.
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.
 
 Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
 are also RCU safe, so the whole read lock operation is guaranteed to function
 correctly.
 
-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
-read/write semaphore, before setting the VMA's sequence number under this lock,
-also simultaneously holding the mmap write lock.
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, VMA's sequence number is set to match that of the
+mm. During this entire operation mmap write lock is held.
 
 This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
 until these are finished and mutual exclusion is achieved.
 
-After setting the VMA's sequence number, the lock is released, avoiding
-complexity with a long-term held write lock.
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, VMA's sequence number will
+indicate VMA's write-locked state until mmap write lock is dropped or downgraded.
 
-This clever combination of a read/write semaphore and sequence count allows for
+This clever combination of a reference counter and sequence count allows for
 fast RCU-based per-VMA lock acquisition (especially on page fault, though
 utilised elsewhere) with minimal complexity around lock ordering.
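
As a closing illustration of the API the documentation describes, the sketch
below shows how a fault-handling path would consume the per-VMA read lock. It
is a simplified illustration, not kernel code: error handling and the fallback
to mmap_lock are reduced to returning VM_FAULT_RETRY.

/*
 * Sketch of a caller of the per-VMA lock API described above. The real
 * page-fault code falls back to the mmap_lock path when the per-VMA lock
 * cannot be taken; here that is condensed into VM_FAULT_RETRY.
 */
static vm_fault_t fault_under_vma_lock_sketch(struct mm_struct *mm,
					      unsigned long address)
{
	struct vm_area_struct *vma;

	vma = lock_vma_under_rcu(mm, address);	/* takes a vma.vm_refcnt reference */
	if (!vma)
		return VM_FAULT_RETRY;		/* contended or stale: use mmap_lock path */

	/* ... handle the fault while holding the VMA read lock ... */

	vma_end_read(vma);			/* drops the vma.vm_refcnt reference */
	return 0;
}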