From patchwork Sat Jan 11 04:25:48 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935767
Date: Fri, 10 Jan 2025 20:25:48 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-2-surenb@google.com>
Subject: [PATCH v9 01/17] mm: introduce vma_start_read_locked{_nested} helpers
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
 vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
Introduce helper functions which can be used to read-lock a VMA when
holding mmap_lock for read. Replace direct accesses to vma->vm_lock
with these new helpers.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Davidlohr Bueso
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h | 24 ++++++++++++++++++++++++
 mm/userfaultfd.c   | 22 +++++-----------------
 2 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8483e09aeb2c..1c0250c187f6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -735,6 +735,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	return true;
 }
 
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read_nested(&vma->vm_lock->lock, subclass);
+}
+
+/*
+ * Use only while holding mmap read lock which guarantees that locking will not
+ * fail (nobody can concurrently write-lock the vma). vma_start_read() should
+ * not be used in such cases because it might fail due to mm_lock_seq overflow.
+ * This functionality is used to obtain vma read lock and drop the mmap read lock.
+ */
+static inline void vma_start_read_locked(struct vm_area_struct *vma)
+{
+	mmap_assert_locked(vma->vm_mm);
+	down_read(&vma->vm_lock->lock);
+}
+
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index af3dfc3633db..4527c385935b 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -84,16 +84,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
-	if (!IS_ERR(vma)) {
-		/*
-		 * We cannot use vma_start_read() as it may fail due to
-		 * false locked (see comment in vma_start_read()). We
-		 * can avoid that by directly locking vm_lock under
-		 * mmap_lock, which guarantees that nobody can lock the
-		 * vma for write (vma_start_write()) under us.
-		 */
-		down_read(&vma->vm_lock->lock);
-	}
+	if (!IS_ERR(vma))
+		vma_start_read_locked(vma);
 
 	mmap_read_unlock(mm);
 	return vma;
@@ -1491,14 +1483,10 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		/*
-		 * See comment in uffd_lock_vma() as to why not using
-		 * vma_start_read() here.
-		 */
-		down_read(&(*dst_vmap)->vm_lock->lock);
+		vma_start_read_locked(*dst_vmap);
 		if (*dst_vmap != *src_vmap)
-			down_read_nested(&(*src_vmap)->vm_lock->lock,
-					 SINGLE_DEPTH_NESTING);
+			vma_start_read_locked_nested(*src_vmap,
+						     SINGLE_DEPTH_NESTING);
 	}
 	mmap_read_unlock(mm);
 	return err;

From patchwork Sat Jan 11 04:25:49 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935768
Date: Fri, 10 Jan 2025 20:25:49 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-3-surenb@google.com>
Subject: [PATCH v9 02/17] mm: move per-vma lock into vm_area_struct
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
 vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com, "Liam R. Howlett"

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefits, there is no reason
for this split. Merging the vm_lock back into vm_area_struct also
allows vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this
patchset.
Move vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned as well.
With a kernel compiled using defconfig, this causes VMA memory
consumption to grow from 160 (vm_area_struct) + 40 (vm_lock) bytes to
256 bytes:

    slabinfo before:
     ... : ...
     vma_lock ... 40 102 1 : ...
     vm_area_struct ... 160 51 2 : ...

    slabinfo after moving vm_lock:
     ... : ...
     vm_area_struct ... 256 32 2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64
pages, which is 5.5MB per 100000 VMAs. Note that the size of this
structure is dependent on the kernel configuration and typically the
original size is higher than 160 bytes. Therefore these calculations
are close to the worst-case scenario. A more realistic vm_area_struct
usage before this change is:

    ... : ...
    vma_lock ... 40 102 1 : ...
    vm_area_struct ... 176 46 2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64
pages, which is 3.9MB per 100000 VMAs. This memory consumption growth
can be addressed later by optimizing the vm_lock.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h               | 28 ++++++++++--------
 include/linux/mm_types.h         |  6 ++--
 kernel/fork.c                    | 49 ++++----------------------------
 tools/testing/vma/vma_internal.h | 33 +++++----------------
 4 files changed, 32 insertions(+), 84 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1c0250c187f6..ed739406b0a7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,6 +697,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->vm_lock.lock);
+	vma->vm_lock_seq = UINT_MAX;
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -714,7 +720,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
 		return false;
 
 	/*
@@ -729,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock->lock);
+		up_read(&vma->vm_lock.lock);
 		return false;
 	}
 	return true;
@@ -744,7 +750,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock->lock, subclass);
+	down_read_nested(&vma->vm_lock.lock, subclass);
 }
 
 /*
@@ -756,13 +762,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock->lock);
+	down_read(&vma->vm_lock.lock);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock->lock);
+	up_read(&vma->vm_lock.lock);
 	rcu_read_unlock();
 }
 
@@ -791,7 +797,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock->lock);
+	down_write(&vma->vm_lock.lock);
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -799,7 +805,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock->lock);
+	up_write(&vma->vm_lock.lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -811,7 +817,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock->lock))
+	if (!rwsem_is_locked(&vma->vm_lock.lock))
 		vma_assert_write_locked(vma);
 }
 
@@ -844,6 +850,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -878,10 +885,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
@@ -890,6 +893,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
 	vma_numab_state_init(vma);
+	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5f1b2dc788e2..6573d95f1d1e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -730,8 +730,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock *vm_lock;
 #endif
 
 	/*
@@ -784,6 +782,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index ded49f18cd95..40a8e615499f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
-	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		kmem_cache_free(vm_area_cachep, vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	if (!vma_lock_alloc(new)) {
-		kmem_cache_free(vm_area_cachep, new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 {
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
-	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3188,11 +3151,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
-	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+	vm_area_cachep = KMEM_CACHE(vm_area_struct,
+			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2404347fa2c7..96aeb28c81f9 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -274,10 +274,10 @@ struct vm_area_struct {
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 *  - mmap_lock (in write mode)
-	 *  - vm_lock->lock (in write mode)
+	 *  - vm_lock.lock (in write mode)
 	 * Can be read reliably while holding one of:
 	 *  - mmap_lock (in read or write mode)
-	 *  - vm_lock->lock (in read or write mode)
+	 *  - vm_lock.lock (in read or write mode)
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -286,7 +286,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock *vm_lock;
+	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -463,17 +463,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
 {
-	vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
+	init_rwsem(&vma->vm_lock.lock);
 	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -496,6 +489,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
+	vma_lock_init(vma);
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -506,10 +500,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		free(vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -522,10 +512,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	if (!vma_lock_alloc(new)) {
-		free(new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 
 	return new;
@@ -695,14 +682,8 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	free(vma->vm_lock);
-}
-
 static inline void __vm_area_free(struct vm_area_struct *vma)
 {
-	vma_lock_free(vma);
 	free(vma);
 }

From patchwork Sat Jan 11 04:25:50 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935769
Date: Fri, 10 Jan 2025 20:25:50 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-4-surenb@google.com>
Subject: [PATCH v9 03/17] mm: mark vma as detached until it's added into vma tree
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com, "Liam R. Howlett"
The current implementation does not set the detached flag when a VMA is first
allocated. This does not represent the real state of the VMA, which is detached
until it is added into the mm's VMA tree. Fix this by marking new VMAs as
detached and clearing the detached flag only after the VMA is added into a
tree. Introduce vma_mark_attached() to make the API more readable and to
simplify a possible future cleanup, when vma->vm_mm might be used to indicate a
detached vma and vma_mark_attached() will need an additional mm parameter.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Shakeel Butt
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
Reviewed-by: Liam R. Howlett
---
 include/linux/mm.h               | 27 ++++++++++++++++++++-------
 kernel/fork.c                    |  4 ++++
 mm/memory.c                      |  2 +-
 mm/vma.c                         |  6 +++---
 mm/vma.h                         |  2 ++
 tools/testing/vma/vma_internal.h | 17 ++++++++++++-----
 6 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed739406b0a7..2b322871da87 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,12 +821,21 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
	vma_assert_write_locked(vma);
 }
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
+}
+
+static inline bool is_vma_detached(struct vm_area_struct *vma)
+{
+	return vma->detached;
 }
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -857,8 +866,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
		{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_mark_detached(struct vm_area_struct *vma,
-				     bool detached) {}
+static inline void vma_mark_attached(struct vm_area_struct *vma) {}
+static inline void vma_mark_detached(struct vm_area_struct *vma) {}
 static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
		unsigned long address)
@@ -891,7 +900,10 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
	vma->vm_mm = mm;
	vma->vm_ops = &vma_dummy_vm_ops;
	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
+#endif
	vma_numab_state_init(vma);
	vma_lock_init(vma);
 }
@@ -1086,6 +1098,7 @@ static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
	if (unlikely(mas_is_err(&vmi->mas)))
		return -ENOMEM;
+	vma_mark_attached(vma);
	return 0;
 }
diff --git a/kernel/fork.c b/kernel/fork.c
index 40a8e615499f..f2f9e7b427ad 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -465,6 +465,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
	data_race(memcpy(new, orig, sizeof(*new)));
	vma_lock_init(new);
	INIT_LIST_HEAD(&new->anon_vma_chain);
+#ifdef CONFIG_PER_VMA_LOCK
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
+#endif
	vma_numab_state_init(new);
	dup_anon_vma_name(orig, new);
diff --git a/mm/memory.c b/mm/memory.c
index 2a20e3810534..d0dee2282325 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6349,7 +6349,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
		goto inval;
	/* Check if the VMA got isolated after we found it */
-	if (vma->detached) {
+	if (is_vma_detached(vma)) {
		vma_end_read(vma);
		count_vm_vma_lock_event(VMA_LOCK_MISS);
		/* The area was replaced with another one */
diff --git a/mm/vma.c b/mm/vma.c
index af1d549b179c..d603494e69d7 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -327,7 +327,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
	if (vp->remove) {
again:
-		vma_mark_detached(vp->remove, true);
+		vma_mark_detached(vp->remove);
		if (vp->file) {
			uprobe_munmap(vp->remove, vp->remove->vm_start,
				      vp->remove->vm_end);
@@ -1221,7 +1221,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
	mas_set(mas_detach, 0);
	mas_for_each(mas_detach, vma, ULONG_MAX)
-		vma_mark_detached(vma, false);
+		vma_mark_attached(vma);
	__mt_destroy(mas_detach->tree);
 }
@@ -1296,7 +1296,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
		if (error)
			goto munmap_gather_failed;
-		vma_mark_detached(next, true);
+		vma_mark_detached(next);
		nrpages = vma_pages(next);
		vms->nr_pages += nrpages;
diff --git a/mm/vma.h b/mm/vma.h
index a2e8710b8c47..2a2668de8d2c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -157,6 +157,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
	if (unlikely(mas_is_err(&vmi->mas)))
		return -ENOMEM;
+	vma_mark_attached(vma);
	return 0;
 }
@@ -389,6 +390,7 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
	mas_store_prealloc(&vmi->mas, vma);
+	vma_mark_attached(vma);
 }
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 96aeb28c81f9..47c8b03ffbbd 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -469,13 +469,17 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
	vma->vm_lock_seq = UINT_MAX;
 }
+static inline void vma_mark_attached(struct vm_area_struct *vma)
+{
+	vma->detached = false;
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
+static inline void vma_mark_detached(struct vm_area_struct *vma)
 {
	/* When detaching vma should be write-locked */
-	if (detached)
-		vma_assert_write_locked(vma);
-	vma->detached = detached;
+	vma_assert_write_locked(vma);
+	vma->detached = true;
 }
 extern const struct vm_operations_struct vma_dummy_vm_ops;
@@ -488,7 +492,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
	vma->vm_mm = mm;
	vma->vm_ops = &vma_dummy_vm_ops;
	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_mark_detached(vma, false);
+	/* vma is not locked, can't use vma_mark_detached() */
+	vma->detached = true;
	vma_lock_init(vma);
 }
@@ -514,6 +519,8 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
	memcpy(new, orig, sizeof(*new));
	vma_lock_init(new);
	INIT_LIST_HEAD(&new->anon_vma_chain);
+	/* vma is not locked, can't use vma_mark_detached() */
+	new->detached = true;
	return new;
 }

From patchwork Sat Jan 11 04:25:51 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935770
Date: Fri, 10 Jan 2025 20:25:51 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-5-surenb@google.com>
Subject: [PATCH v9 04/17] mm: introduce vma_iter_store_attached() to use with attached vmas
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
The vma_iter_store() functions can be used both when adding a new vma and when
updating an existing one. However, for existing vmas we do not need to mark
them attached, as they are already marked that way. Introduce
vma_iter_store_attached() to be used with already-attached vmas.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 12 ++++++++++++
 mm/vma.c           |  8 ++++----
 mm/vma.h           | 11 +++++++++--
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2b322871da87..2f805f1a0176 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -821,6 +821,16 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
	vma_assert_write_locked(vma);
 }
+static inline void vma_assert_attached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(vma->detached, vma);
+}
+
+static inline void vma_assert_detached(struct vm_area_struct *vma)
+{
+	VM_BUG_ON_VMA(!vma->detached, vma);
+}
+
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
	vma->detached = false;
@@ -866,6 +876,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
		{ mmap_assert_write_locked(vma->vm_mm); }
+static inline void vma_assert_attached(struct vm_area_struct *vma) {}
+static inline void vma_assert_detached(struct vm_area_struct *vma) {}
 static inline void vma_mark_attached(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma) {}
diff --git a/mm/vma.c b/mm/vma.c
index d603494e69d7..b9cf552e120c 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -660,14 +660,14 @@ static int commit_merge(struct vma_merge_struct *vmg,
	vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
	if (expanded)
-		vma_iter_store(vmg->vmi, vmg->vma);
+		vma_iter_store_attached(vmg->vmi, vmg->vma);
	if (adj_start) {
		adjust->vm_start += adj_start;
		adjust->vm_pgoff += PHYS_PFN(adj_start);
		if (adj_start < 0) {
			WARN_ON(expanded);
-			vma_iter_store(vmg->vmi, adjust);
+			vma_iter_store_attached(vmg->vmi, adjust);
		}
	}
@@ -2845,7 +2845,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
		anon_vma_interval_tree_pre_update_vma(vma);
		vma->vm_end = address;
		/* Overwrite old entry in mtree. */
-		vma_iter_store(&vmi, vma);
+		vma_iter_store_attached(&vmi, vma);
		anon_vma_interval_tree_post_update_vma(vma);
		perf_event_mmap(vma);
@@ -2925,7 +2925,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
		vma->vm_start = address;
		vma->vm_pgoff -= grow;
		/* Overwrite old entry in mtree. */
-		vma_iter_store(&vmi, vma);
+		vma_iter_store_attached(&vmi, vma);
		anon_vma_interval_tree_post_update_vma(vma);
		perf_event_mmap(vma);
diff --git a/mm/vma.h b/mm/vma.h
index 2a2668de8d2c..63dd38d5230c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -365,9 +365,10 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
 }
 /* Store a VMA with preallocated memory */
-static inline void vma_iter_store(struct vma_iterator *vmi,
-				  struct vm_area_struct *vma)
+static inline void vma_iter_store_attached(struct vma_iterator *vmi,
+					   struct vm_area_struct *vma)
 {
+	vma_assert_attached(vma);
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
	if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
@@ -390,7 +391,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
	mas_store_prealloc(&vmi->mas, vma);
+}
+
+static inline void vma_iter_store(struct vma_iterator *vmi,
+				  struct vm_area_struct *vma)
+{
	vma_mark_attached(vma);
+	vma_iter_store_attached(vmi, vma);
 }
 static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)

From patchwork Sat Jan 11 04:25:52 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935771
Date: Fri, 10 Jan 2025 20:25:52 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-6-surenb@google.com>
Subject: [PATCH v9 05/17] mm: mark vmas detached upon exit
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com

When exit_mmap() removes vmas belonging to an exiting task, it does not
mark them as detached since they can't be reached by other tasks and they
will be freed shortly. Once we introduce vma reuse, all vmas will have to
be in a detached state before they are freed, to ensure that a vma is in a
consistent state when it is reused. Add the missing vma_mark_detached()
before freeing the vma.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/vma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index b9cf552e120c..93ff42ac2002 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -413,10 +413,12 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable)
+	if (unreachable) {
+		vma_mark_detached(vma);
 		__vm_area_free(vma);
-	else
+	} else {
 		vm_area_free(vma);
+	}
 }

 /*
Date: Fri, 10 Jan 2025 20:25:53 -0800
Message-ID: <20250111042604.3230628-7-surenb@google.com>
Subject: [PATCH v9 06/17] types: move struct rcuwait into types.h
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Move the rcuwait struct definition into types.h so that rcuwait can be
used without including rcuwait.h, which includes other headers. Without
this change mm_types.h can't use rcuwait due to the following circular
dependency:

mm_types.h -> rcuwait.h -> signal.h -> mm_types.h

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Liam R. Howlett <liam.howlett@oracle.com>
---
 include/linux/rcuwait.h | 13 +------------
 include/linux/types.h   | 12 ++++++++++++
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
index 27343424225c..9ad134a04b41 100644
--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -4,18 +4,7 @@
 #include <linux/rcupdate.h>
 #include <linux/sched/signal.h>
-
-/*
- * rcuwait provides a way of blocking and waking up a single
- * task in an rcu-safe manner.
- *
- * The only time @task is non-nil is when a user is blocked (or
- * checking if it needs to) on a condition, and reset as soon as we
- * know that the condition has succeeded and are awoken.
- */
-struct rcuwait {
-	struct task_struct __rcu *task;
-};
+#include <linux/types.h>

 #define __RCUWAIT_INITIALIZER(name) \
 	{ .task = NULL, }

diff --git a/include/linux/types.h b/include/linux/types.h
index 2d7b9ae8714c..f1356a9a5730 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -248,5 +248,17 @@ typedef void (*swap_func_t)(void *a, void *b, int size);
 typedef int (*cmp_r_func_t)(const void *a, const void *b, const void *priv);
 typedef int (*cmp_func_t)(const void *a, const void *b);

+/*
+ * rcuwait provides a way of blocking and waking up a single
+ * task in an rcu-safe manner.
+ *
+ * The only time @task is non-nil is when a user is blocked (or
+ * checking if it needs to) on a condition, and reset as soon as we
+ * know that the condition has succeeded and are awoken.
+ */
+struct rcuwait {
+	struct task_struct __rcu *task;
+};
+
 #endif /* __ASSEMBLY__ */
 #endif /* _LINUX_TYPES_H */
Date: Fri, 10 Jan 2025 20:25:54 -0800
Message-ID: <20250111042604.3230628-8-surenb@google.com>
Subject: [PATCH v9 07/17] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org

With the upcoming replacement of vm_lock with vm_refcnt, we need to handle
the possibility of vma_start_read_locked/vma_start_read_locked_nested
failing due to refcount overflow. Prepare for that possibility by changing
these APIs and adjusting their users.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Lokesh Gidra <lokeshgidra@google.com>
---
 include/linux/mm.h | 6 ++++--
 mm/userfaultfd.c   | 18 +++++++++++++-----
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2f805f1a0176..cbb4e3dbbaed 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -747,10 +747,11 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 * not be used in such cases because it might fail due to mm_lock_seq overflow.
 * This functionality is used to obtain vma read lock and drop the mmap read lock.
 */
-static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read_nested(&vma->vm_lock.lock, subclass);
+	return true;
 }

 /*
@@ -759,10 +760,11 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 * not be used in such cases because it might fail due to mm_lock_seq overflow.
 * This functionality is used to obtain vma read lock and drop the mmap read lock.
 */
-static inline void vma_start_read_locked(struct vm_area_struct *vma)
+static inline bool vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 	down_read(&vma->vm_lock.lock);
+	return true;
 }

 static inline void vma_end_read(struct vm_area_struct *vma)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 4527c385935b..411a663932c4 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -85,7 +85,8 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	vma = find_vma_and_prepare_anon(mm, address);
 	if (!IS_ERR(vma))
-		vma_start_read_locked(vma);
+		if (!vma_start_read_locked(vma))
+			vma = ERR_PTR(-EAGAIN);

 	mmap_read_unlock(mm);
 	return vma;
@@ -1483,10 +1484,17 @@ static int uffd_move_lock(struct mm_struct *mm,
 	mmap_read_lock(mm);
 	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
 	if (!err) {
-		vma_start_read_locked(*dst_vmap);
-		if (*dst_vmap != *src_vmap)
-			vma_start_read_locked_nested(*src_vmap,
-						     SINGLE_DEPTH_NESTING);
+		if (vma_start_read_locked(*dst_vmap)) {
+			if (*dst_vmap != *src_vmap) {
+				if (!vma_start_read_locked_nested(*src_vmap,
+							SINGLE_DEPTH_NESTING)) {
+					vma_end_read(*dst_vmap);
+					err = -EAGAIN;
+				}
+			}
+		} else {
+			err = -EAGAIN;
+		}
 	}
 	mmap_read_unlock(mm);
 	return err;
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935774
:date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=mjOxH5E6F+PRJReyBLOWVSfVgJNT1AsQeggvw/y9xao=; b=tbaA6vAIPJQ+kuOj+s0V3UJLgtt49uWrYd1BWSfslxNxChcwsz5gA/sJ/EAr3Frucg x/ot2jRa55O1+YqLIXKCNEzxAZY38uZlJAPweavkkIRsQ8W5DFIU/PXIG7kErD1bA/LF YBrFLVzdDs1ywlSNjSw/wutpocKsyQD9ov8Tw53QrbJBZaUify9I8LVdxZKhPYEplg0h WoKbcXoE39QLMIXBn2gnpKXo1Q6BjsmaK889bT+4YWMHMEGeQGyNf21/xF5VI3SQpux4 pIu/MJBA3sfhMkklwTEOMrKgM2nj5XzJZVILOmajyNWShWcnAna/lmW2l3KlrWHgvB6q BWgg== X-Forwarded-Encrypted: i=1; AJvYcCVTGoGGpz09i6BYqiIuuGLjgrpnRUHM5wMMoCg+o8El6yTodnt8gM3HmdvcUSzU4AZlmkD9ig/NtQ==@kvack.org X-Gm-Message-State: AOJu0Yy3GWKMygTGUBnQCPsRwRdDAWiJWy3jjOsDU0EmUiRfUDWdf9rL XSgBd1qpKP1x4XuPfyyOG0H0EnIQllJ8JBCDxorhTfRqgvBlTAxTfP9GmkbYTOpWwHMYcs3DlZJ huQ== X-Google-Smtp-Source: AGHT+IE0/9yJJ25TvfX89iO9TEsxGGT3C6giF06ZQXBi5U4FgLqynwZmNkiGXRge1CwIKkM+8jgi9u5PJaA= X-Received: from pfus6.prod.google.com ([2002:a05:6a00:8c6:b0:728:e1a0:2e73]) (user=surenb job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6a21:7e02:b0:1e8:a374:cee6 with SMTP id adf61e73a8af0-1e8a374d574mr11673994637.6.1736569585095; Fri, 10 Jan 2025 20:26:25 -0800 (PST) Date: Fri, 10 Jan 2025 20:25:55 -0800 In-Reply-To: <20250111042604.3230628-1-surenb@google.com> Mime-Version: 1.0 References: <20250111042604.3230628-1-surenb@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20250111042604.3230628-9-surenb@google.com> Subject: [PATCH v9 08/17] mm: move mmap_init_lock() out of the header file From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, 
minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com X-Stat-Signature: j3dguu9mjiqr6t6o3946qzpdmj4etgru X-Rspam-User: X-Rspamd-Queue-Id: 50177100010 X-Rspamd-Server: rspam08 X-HE-Tag: 1736569586-858784 X-HE-Meta: U2FsdGVkX195eFR7cl0O9l+wMjeuT1P9LDjudrhUi5MHixsn9BShVAzQL2X8I7GmS5UTWRUemvICzT/b2HsmXY98bWPN+RMJ7SbVZlLD9IYncX/xjiMmHCBWtHpdhhp0CkkFNIvy0t2mnyVd2diHwJYf+xgmdPrAkcGCmEoGWxf8wF+46t+17qnVTXYN5RJwlW6Juv0UjE3NJRWG4gMlQqp6sHRec0RynVxVm11h/qMn4QjvCJc1NlCO/e2t0Dcgi1Obe361Om4RiYpbddAIype3qO8jz4BMANT6XPPvPQNhZD7kIv8sKwDBZp4pfakuFne5fixz4iFKqgW0EePjL5/vMyap9DV1z7FSagEGVcEmyZRxy6vY5U8VGc+M13F7cDvRlUtY05CpCdFvi05KIEQNZBu8wnMcxUJjlIGvBuV8ImiGZlrMf4Lpd1qRHTXlww2EUgWXVW3n71DXDKLcWb5Bo5pdrefqLmtHieymqSI9k/0fkg6m5/fUjhuIcArMSqXG6xKFni8MNrHwG6/4Xf7GZlYt9Vyyiiycp5XAodAOa/8VEcsXhedxlpdL5W7NNyyTrd9SNahkExW3k4zr3LMVhMZv9ZVVI2C6+GdlWFv0x/pQEDNth9a6jZax87S/vxsFyVEbRZ5xh4BSOEcRKkjVGSCxckRIzyk/VinqrdXXvC9BgB7XZ1pTF76ha139AmiLkeScZFFrNrQ7LLZ1Waug+o3+0bqpRX1fgMU/ZrTzxYlJ+BBpqEP/CYiqBE0MaI0Us6s+y03pqeWooL6kt596ZsZUzfSXzfflpbIx9xTHQpD7CJ5SRSoY8nOjNrnGUA7iwaQNq/zeR4GXQ+fjqQCh9hHDuiycMesHS32wCHBrE0/Vm3GVHsXuDHUGSjzARnQD6lDk6q0Xwme/hjbprneMVDuhpypZ4LBe9V8/DRcLnoRv2D83vPcQlCi1O3LVmrSLB3VxFloNW0iZ0wa 3bW51BPW 
mmap_init_lock() is used only from mm_init() in fork.c, therefore it does not have to reside in the header file. This move lets us avoid including additional headers in mmap_lock.h later, when mmap_init_lock() needs to initialize an rcuwait object.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mmap_lock.h | 6 ------
 kernel/fork.c             | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 45a21faa3ff6..4706c6769902 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -122,12 +122,6 @@ static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int
 
 #endif /* CONFIG_PER_VMA_LOCK */
 
-static inline void mmap_init_lock(struct mm_struct *mm)
-{
-	init_rwsem(&mm->mmap_lock);
-	mm_lock_seqcount_init(mm);
-}
-
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index f2f9e7b427ad..d4c75428ccaf 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1219,6 +1219,12 @@ static void mm_init_uprobes_state(struct mm_struct *mm)
 #endif
 }
 
+static inline void mmap_init_lock(struct mm_struct *mm)
+{
+	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
+}
+
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
	struct user_namespace *user_ns)
 {

From patchwork Sat Jan 11 04:25:56 2025
Date: Fri, 10 Jan 2025 20:25:56 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-10-surenb@google.com>
Subject: [PATCH v9 09/17] mm: uninline the main body of vma_start_write()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
vma_start_write() is used in many places and will grow in size very soon. It is not used in performance critical paths and uninlining it should limit the future code size growth.

No functional changes.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 12 +++---------
 mm/memory.c        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cbb4e3dbbaed..3432756d95e6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -787,6 +787,8 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
@@ -799,15 +801,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index d0dee2282325..236fdecd44d6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6328,6 +6328,20 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the

From patchwork Sat Jan 11 04:25:57 2025
Date: Fri, 10 Jan 2025 20:25:57 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-11-surenb@google.com>
Subject: [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Introduce functions to increase a refcount but with a top limit above which they will fail to increase it (the limit is inclusive). Setting the limit to INT_MAX indicates no limit.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/refcount.h | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index 35f039ecb272..5072ba99f05e 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -137,13 +137,23 @@ static inline unsigned int refcount_read(const refcount_t *r)
 }
 
 static inline __must_check __signed_wrap
-bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
+				     int limit)
 {
 	int old = refcount_read(r);
 
 	do {
 		if (!old)
 			break;
+
+		if (statically_true(limit == INT_MAX))
+			continue;
+
+		if (i > limit - old) {
+			if (oldp)
+				*oldp = old;
+			return false;
+		}
 	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
 
 	if (oldp)
@@ -155,6 +165,12 @@ bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
 	return old;
 }
 
+static inline __must_check __signed_wrap
+bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
+{
+	return __refcount_add_not_zero_limited(i, r, oldp, INT_MAX);
+}
+
 /**
  * refcount_add_not_zero - add a value to a refcount unless it is 0
  * @i: the value to add to the refcount
@@ -213,6 +229,12 @@ static inline void refcount_add(int i, refcount_t *r)
 	__refcount_add(i, r, NULL);
 }
 
+static inline __must_check bool __refcount_inc_not_zero_limited(refcount_t *r,
+								int *oldp, int limit)
+{
+	return __refcount_add_not_zero_limited(1, r, oldp, limit);
+}
+
 static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp)
 {
 	return __refcount_add_not_zero(1, r, oldp);
From patchwork Sat Jan 11 04:25:58 2025

Date: Fri, 10 Jan 2025 20:25:58 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-12-surenb@google.com>
Subject: [PATCH v9 11/17] mm: replace vm_lock and detached flag with a reference count
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
uOio+824q8U0h3JZIkLJ/UhD3w66u5CRKNCfy/Nwf5P4kcn7jixCt62UfLG33jfBbYonlMKKGEZhF+lIYI/9o94b5bj/E4DnMhH/5XKK2ez4MQqLMVkeF3gheh6cYbhaKlHu0zFEEa9+5/kZ9RgohSvQFUggaRNSxQX0hMx3N8QdM9/8Ti1lAOR0adFJwqF+XtJQktU4go44Gc4uYrb0BPuADBZsQueqNO1VHvPyca744/lZMKU1wD4Bm4EeBThSWJ4FCTm1QHXTsIRiMKtzaZTWAhFZe8tsW7bFXldLakws6L++zU6go9TiUK95inF3u2+meHOKAFUFXqJzKLpbb1rdBj/ASrNeZ6NbN8ooKFHXsSZn7loFiSRVQ7z1ZTk3l/zk2gVjlYXlN1tNLfSJgEqe3I18mEI0GeOkjtJYhG40pC+BbXfvamO5etRayuklPx1GgRy2x0RwxNyv2N+g5HTohTjUpoaxOPNlv04th2AH9YvfaMUsAUvRSQd6DMSWnwV/VeJXz2wLA4RrAqM1HROC3BbicrQFvl2kH+Ds01SaPBRbx9xOu2KXWMUOsxPs8kZDAbA7CKp3UBp1MPiqD8Zt56yB/Ypt8PHOvtBAZLJWxqwfuDPInO5h8IB6LUvGivAaVpIB3x8MgMflqvI9L8fo8hxUmKxFUVPDXTnZdUi+6ERfTS5fL54OpQ4KwoMT17jOIUU3P/WNQXDwW3mDPBo5ufpW80P8q1DJW X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: rw_semaphore is a sizable structure of 40 bytes and consumes considerable space for each vm_area_struct. However vma_lock has two important specifics which can be used to replace rw_semaphore with a simpler structure: 1. Readers never wait. They try to take the vma_lock and fall back to mmap_lock if that fails. 2. Only one writer at a time will ever try to write-lock a vma_lock because writers first take mmap_lock in write mode. Because of these requirements, full rw_semaphore functionality is not needed and we can replace rw_semaphore and the vma->detached flag with a refcount (vm_refcnt). When vma is in detached state, vm_refcnt is 0 and only a call to vma_mark_attached() can take it out of this state. Note that unlike before, now we enforce both vma_mark_attached() and vma_mark_detached() to be done only after vma has been write-locked. vma_mark_attached() changes vm_refcnt to 1 to indicate that it has been attached to the vma tree. 
When a reader takes the read lock, it increments vm_refcnt, unless the top usable bit of vm_refcnt (0x40000000) is set, indicating the presence of a writer. When a writer takes the write lock, it sets the top usable bit to indicate its presence. If there are readers, the writer will wait using the newly introduced mm->vma_writer_wait. Since all writers take mmap_lock in write mode first, there can be only one writer at a time. The last reader to release the lock will signal the writer to wake up. The refcount might overflow if there are many competing readers, in which case read-locking fails. Readers are expected to handle such failures.

In summary:

1. All readers increment the vm_refcnt.
2. The writer sets the top usable (writer) bit of vm_refcnt.
3. Readers cannot increment the vm_refcnt if the writer bit is set.
4. In the presence of readers, the writer must wait for the vm_refcnt to drop to 1 (ignoring the writer bit), indicating an attached vma with no readers.
5. vm_refcnt overflow is handled by the readers.

While this vm_lock replacement does not yet result in a smaller vm_area_struct (it stays at 256 bytes due to cacheline alignment), it allows for further size optimization by structure member regrouping to bring the size of vm_area_struct below 192 bytes.
Suggested-by: Peter Zijlstra Suggested-by: Matthew Wilcox Signed-off-by: Suren Baghdasaryan Reviewed-by: Vlastimil Babka --- include/linux/mm.h | 102 +++++++++++++++++++++---------- include/linux/mm_types.h | 22 +++---- kernel/fork.c | 13 ++-- mm/init-mm.c | 1 + mm/memory.c | 80 +++++++++++++++++++++--- tools/testing/vma/linux/atomic.h | 5 ++ tools/testing/vma/vma_internal.h | 66 +++++++++++--------- 7 files changed, 198 insertions(+), 91 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 3432756d95e6..a99b11ee1f66 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -32,6 +32,7 @@ #include #include #include +#include struct mempolicy; struct anon_vma; @@ -697,12 +698,43 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {} #endif /* CONFIG_NUMA_BALANCING */ #ifdef CONFIG_PER_VMA_LOCK -static inline void vma_lock_init(struct vm_area_struct *vma) +static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) { - init_rwsem(&vma->vm_lock.lock); +#ifdef CONFIG_DEBUG_LOCK_ALLOC + static struct lock_class_key lockdep_key; + + lockdep_init_map(&vma->vmlock_dep_map, "vm_lock", &lockdep_key, 0); +#endif + if (reset_refcnt) + refcount_set(&vma->vm_refcnt, 0); vma->vm_lock_seq = UINT_MAX; } +static inline bool is_vma_writer_only(int refcnt) +{ + /* + * With a writer and no readers, refcnt is VMA_LOCK_OFFSET if the vma + * is detached and (VMA_LOCK_OFFSET + 1) if it is attached. Waiting on + * a detached vma happens only in vma_mark_detached() and is a rare + * case, therefore most of the time there will be no unnecessary wakeup. 
+ */ + return refcnt & VMA_LOCK_OFFSET && refcnt <= VMA_LOCK_OFFSET + 1; +} + +static inline void vma_refcount_put(struct vm_area_struct *vma) +{ + /* Use a copy of vm_mm in case vma is freed after we drop vm_refcnt */ + struct mm_struct *mm = vma->vm_mm; + int oldcnt; + + rwsem_release(&vma->vmlock_dep_map, _RET_IP_); + if (!__refcount_dec_and_test(&vma->vm_refcnt, &oldcnt)) { + + if (is_vma_writer_only(oldcnt - 1)) + rcuwait_wake_up(&mm->vma_writer_wait); + } +} + /* * Try to read-lock a vma. The function is allowed to occasionally yield false * locked result to avoid performance overhead, in which case we fall back to @@ -710,6 +742,8 @@ static inline void vma_lock_init(struct vm_area_struct *vma) */ static inline bool vma_start_read(struct vm_area_struct *vma) { + int oldcnt; + /* * Check before locking. A race might cause false locked result. * We can use READ_ONCE() for the mm_lock_seq here, and don't need @@ -720,13 +754,19 @@ static inline bool vma_start_read(struct vm_area_struct *vma) if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence)) return false; - if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0)) + /* + * If VMA_LOCK_OFFSET is set, __refcount_inc_not_zero_limited() will fail + * because VMA_REF_LIMIT is less than VMA_LOCK_OFFSET. + */ + if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt, + VMA_REF_LIMIT))) return false; + rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_); /* - * Overflow might produce false locked result. + * Overflow of vm_lock_seq/mm_lock_seq might produce false locked result. * False unlocked result is impossible because we modify and check - * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq + * vma->vm_lock_seq under vma->vm_refcnt protection and mm->mm_lock_seq * modification invalidates all existing locks. 
* * We must use ACQUIRE semantics for the mm_lock_seq so that if we are @@ -735,9 +775,10 @@ static inline bool vma_start_read(struct vm_area_struct *vma) * This pairs with RELEASE semantics in vma_end_write_all(). */ if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) { - up_read(&vma->vm_lock.lock); + vma_refcount_put(vma); return false; } + return true; } @@ -749,8 +790,14 @@ static inline bool vma_start_read(struct vm_area_struct *vma) */ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass) { + int oldcnt; + mmap_assert_locked(vma->vm_mm); - down_read_nested(&vma->vm_lock.lock, subclass); + if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt, &oldcnt, + VMA_REF_LIMIT))) + return false; + + rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_); return true; } @@ -762,16 +809,12 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int */ static inline bool vma_start_read_locked(struct vm_area_struct *vma) { - mmap_assert_locked(vma->vm_mm); - down_read(&vma->vm_lock.lock); - return true; + return vma_start_read_locked_nested(vma, 0); } static inline void vma_end_read(struct vm_area_struct *vma) { - rcu_read_lock(); /* keeps vma alive till the end of up_read */ - up_read(&vma->vm_lock.lock); - rcu_read_unlock(); + vma_refcount_put(vma); } /* WARNING! Can only be used if mmap_lock is expected to be write-locked */ @@ -813,36 +856,33 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) static inline void vma_assert_locked(struct vm_area_struct *vma) { - if (!rwsem_is_locked(&vma->vm_lock.lock)) + if (refcount_read(&vma->vm_refcnt) <= 1) vma_assert_write_locked(vma); } +/* + * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these + * assertions should be made either under mmap_write_lock or when the object + * has been isolated under mmap_write_lock, ensuring no competing writers. 
+ */ static inline void vma_assert_attached(struct vm_area_struct *vma) { - VM_BUG_ON_VMA(vma->detached, vma); + VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma); } static inline void vma_assert_detached(struct vm_area_struct *vma) { - VM_BUG_ON_VMA(!vma->detached, vma); + VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma); } static inline void vma_mark_attached(struct vm_area_struct *vma) { - vma->detached = false; -} - -static inline void vma_mark_detached(struct vm_area_struct *vma) -{ - /* When detaching vma should be write-locked */ vma_assert_write_locked(vma); - vma->detached = true; + vma_assert_detached(vma); + refcount_set(&vma->vm_refcnt, 1); } -static inline bool is_vma_detached(struct vm_area_struct *vma) -{ - return vma->detached; -} +void vma_mark_detached(struct vm_area_struct *vma); static inline void release_fault_lock(struct vm_fault *vmf) { @@ -865,7 +905,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm, #else /* CONFIG_PER_VMA_LOCK */ -static inline void vma_lock_init(struct vm_area_struct *vma) {} +static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {} static inline bool vma_start_read(struct vm_area_struct *vma) { return false; } static inline void vma_end_read(struct vm_area_struct *vma) {} @@ -908,12 +948,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm) vma->vm_mm = mm; vma->vm_ops = &vma_dummy_vm_ops; INIT_LIST_HEAD(&vma->anon_vma_chain); -#ifdef CONFIG_PER_VMA_LOCK - /* vma is not locked, can't use vma_mark_detached() */ - vma->detached = true; -#endif vma_numab_state_init(vma); - vma_lock_init(vma); + vma_lock_init(vma, false); } /* Use when VMA is not part of the VMA tree and needs no locking */ diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 6573d95f1d1e..9228d19662c6 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -19,6 +19,7 @@ #include #include #include +#include #include @@ -629,9 +630,8 @@ static 
inline struct anon_vma_name *anon_vma_name_alloc(const char *name) } #endif -struct vma_lock { - struct rw_semaphore lock; -}; +#define VMA_LOCK_OFFSET 0x40000000 +#define VMA_REF_LIMIT (VMA_LOCK_OFFSET - 1) struct vma_numab_state { /* @@ -709,19 +709,13 @@ struct vm_area_struct { }; #ifdef CONFIG_PER_VMA_LOCK - /* - * Flag to indicate areas detached from the mm->mm_mt tree. - * Unstable RCU readers are allowed to read this. - */ - bool detached; - /* * Can only be written (using WRITE_ONCE()) while holding both: * - mmap_lock (in write mode) - * - vm_lock->lock (in write mode) + * - vm_refcnt bit at VMA_LOCK_OFFSET is set * Can be read reliably while holding one of: * - mmap_lock (in read or write mode) - * - vm_lock->lock (in read or write mode) + * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout * while holding nothing (except RCU to keep the VMA struct allocated). * @@ -784,7 +778,10 @@ struct vm_area_struct { struct vm_userfaultfd_ctx vm_userfaultfd_ctx; #ifdef CONFIG_PER_VMA_LOCK /* Unstable RCU readers are allowed to read this. */ - struct vma_lock vm_lock ____cacheline_aligned_in_smp; + refcount_t vm_refcnt ____cacheline_aligned_in_smp; +#ifdef CONFIG_DEBUG_LOCK_ALLOC + struct lockdep_map vmlock_dep_map; +#endif #endif } __randomize_layout; @@ -919,6 +916,7 @@ struct mm_struct { * by mmlist_lock */ #ifdef CONFIG_PER_VMA_LOCK + struct rcuwait vma_writer_wait; /* * This field has lock-like semantics, meaning it is sometimes * accessed with ACQUIRE/RELEASE semantics. diff --git a/kernel/fork.c b/kernel/fork.c index d4c75428ccaf..9d9275783cf8 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -463,12 +463,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) * will be reinitialized. 
*/ data_race(memcpy(new, orig, sizeof(*new))); - vma_lock_init(new); + vma_lock_init(new, true); INIT_LIST_HEAD(&new->anon_vma_chain); -#ifdef CONFIG_PER_VMA_LOCK - /* vma is not locked, can't use vma_mark_detached() */ - new->detached = true; -#endif vma_numab_state_init(new); dup_anon_vma_name(orig, new); @@ -477,6 +473,8 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) void __vm_area_free(struct vm_area_struct *vma) { + /* The vma should be detached while being destroyed. */ + vma_assert_detached(vma); vma_numab_state_free(vma); free_anon_vma_name(vma); kmem_cache_free(vm_area_cachep, vma); @@ -488,8 +486,6 @@ static void vm_area_free_rcu_cb(struct rcu_head *head) struct vm_area_struct *vma = container_of(head, struct vm_area_struct, vm_rcu); - /* The vma should not be locked while being destroyed. */ - VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma); __vm_area_free(vma); } #endif @@ -1223,6 +1219,9 @@ static inline void mmap_init_lock(struct mm_struct *mm) { init_rwsem(&mm->mmap_lock); mm_lock_seqcount_init(mm); +#ifdef CONFIG_PER_VMA_LOCK + rcuwait_init(&mm->vma_writer_wait); +#endif } static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, diff --git a/mm/init-mm.c b/mm/init-mm.c index 6af3ad675930..4600e7605cab 100644 --- a/mm/init-mm.c +++ b/mm/init-mm.c @@ -40,6 +40,7 @@ struct mm_struct init_mm = { .arg_lock = __SPIN_LOCK_UNLOCKED(init_mm.arg_lock), .mmlist = LIST_HEAD_INIT(init_mm.mmlist), #ifdef CONFIG_PER_VMA_LOCK + .vma_writer_wait = __RCUWAIT_INITIALIZER(init_mm.vma_writer_wait), .mm_lock_seq = SEQCNT_ZERO(init_mm.mm_lock_seq), #endif .user_ns = &init_user_ns, diff --git a/mm/memory.c b/mm/memory.c index 236fdecd44d6..dc16b67beefa 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -6328,9 +6328,47 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm, #endif #ifdef CONFIG_PER_VMA_LOCK +static inline bool __vma_enter_locked(struct vm_area_struct *vma, bool detaching) +{ + unsigned int tgt_refcnt 
= VMA_LOCK_OFFSET; + + /* Additional refcnt if the vma is attached. */ + if (!detaching) + tgt_refcnt++; + + /* + * If vma is detached then only vma_mark_attached() can raise the + * vm_refcnt. mmap_write_lock prevents racing with vma_mark_attached(). + */ + if (!refcount_add_not_zero(VMA_LOCK_OFFSET, &vma->vm_refcnt)) + return false; + + rwsem_acquire(&vma->vmlock_dep_map, 0, 0, _RET_IP_); + rcuwait_wait_event(&vma->vm_mm->vma_writer_wait, + refcount_read(&vma->vm_refcnt) == tgt_refcnt, + TASK_UNINTERRUPTIBLE); + lock_acquired(&vma->vmlock_dep_map, _RET_IP_); + + return true; +} + +static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached) +{ + *detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt); + rwsem_release(&vma->vmlock_dep_map, _RET_IP_); +} + void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq) { - down_write(&vma->vm_lock.lock); + bool locked; + + /* + * __vma_enter_locked() returns false immediately if the vma is not + * attached, otherwise it waits until refcnt is indicating that vma + * is attached with no readers. + */ + locked = __vma_enter_locked(vma, false); + /* * We should use WRITE_ONCE() here because we can have concurrent reads * from the early lockless pessimistic check in vma_start_read(). @@ -6338,10 +6376,40 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq) * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy. */ WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq); - up_write(&vma->vm_lock.lock); + + if (locked) { + bool detached; + + __vma_exit_locked(vma, &detached); + VM_BUG_ON_VMA(detached, vma); /* vma should remain attached */ + } } EXPORT_SYMBOL_GPL(__vma_start_write); +void vma_mark_detached(struct vm_area_struct *vma) +{ + vma_assert_write_locked(vma); + vma_assert_attached(vma); + + /* + * We are the only writer, so no need to use vma_refcount_put(). 
+ * The condition below is unlikely because the vma has been already + * write-locked and readers can increment vm_refcnt only temporarily + * before they check vm_lock_seq, realize the vma is locked and drop + * back the vm_refcnt. That is a narrow window for observing a raised + * vm_refcnt. + */ + if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) { + /* Wait until vma is detached with no readers. */ + if (__vma_enter_locked(vma, true)) { + bool detached; + + __vma_exit_locked(vma, &detached); + VM_BUG_ON_VMA(!detached, vma); + } + } +} + /* * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be * stable and not isolated. If the VMA is not found or is being modified the @@ -6354,7 +6422,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm, struct vm_area_struct *vma; rcu_read_lock(); -retry: vma = mas_walk(&mas); if (!vma) goto inval; @@ -6362,13 +6429,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm, if (!vma_start_read(vma)) goto inval; - /* Check if the VMA got isolated after we found it */ - if (is_vma_detached(vma)) { - vma_end_read(vma); - count_vm_vma_lock_event(VMA_LOCK_MISS); - /* The area was replaced with another one */ - goto retry; - } /* * At this point, we have a stable reference to a VMA: The VMA is * locked and we know it hasn't already been isolated. 
diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h index 3e1b6adc027b..788c597c4fde 100644 --- a/tools/testing/vma/linux/atomic.h +++ b/tools/testing/vma/linux/atomic.h @@ -9,4 +9,9 @@ #define atomic_set(x, y) uatomic_set(x, y) #define U8_MAX UCHAR_MAX +#ifndef atomic_cmpxchg_relaxed +#define atomic_cmpxchg_relaxed uatomic_cmpxchg +#define atomic_cmpxchg_release uatomic_cmpxchg +#endif /* atomic_cmpxchg_relaxed */ + #endif /* _LINUX_ATOMIC_H */ diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h index 47c8b03ffbbd..2ce032943861 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -25,7 +25,7 @@ #include #include #include -#include +#include extern unsigned long stack_guard_gap; #ifdef CONFIG_MMU @@ -134,10 +134,6 @@ typedef __bitwise unsigned int vm_fault_t; */ #define pr_warn_once pr_err -typedef struct refcount_struct { - atomic_t refs; -} refcount_t; - struct kref { refcount_t refcount; }; @@ -232,15 +228,12 @@ struct mm_struct { unsigned long flags; /* Must use atomic bitops to access */ }; -struct vma_lock { - struct rw_semaphore lock; -}; - - struct file { struct address_space *f_mapping; }; +#define VMA_LOCK_OFFSET 0x40000000 + struct vm_area_struct { /* The first cache line has the info for VMA tree walking. */ @@ -268,16 +261,13 @@ struct vm_area_struct { }; #ifdef CONFIG_PER_VMA_LOCK - /* Flag to indicate areas detached from the mm->mm_mt tree */ - bool detached; - /* * Can only be written (using WRITE_ONCE()) while holding both: * - mmap_lock (in write mode) - * - vm_lock.lock (in write mode) + * - vm_refcnt bit at VMA_LOCK_OFFSET is set * Can be read reliably while holding one of: * - mmap_lock (in read or write mode) - * - vm_lock.lock (in read or write mode) + * - vm_refcnt bit at VMA_LOCK_OFFSET is set or vm_refcnt > 1 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout * while holding nothing (except RCU to keep the VMA struct allocated). 
* @@ -286,7 +276,6 @@ struct vm_area_struct { * slowpath. */ unsigned int vm_lock_seq; - struct vma_lock vm_lock; #endif /* @@ -339,6 +328,10 @@ struct vm_area_struct { struct vma_numab_state *numab_state; /* NUMA Balancing state */ #endif struct vm_userfaultfd_ctx vm_userfaultfd_ctx; +#ifdef CONFIG_PER_VMA_LOCK + /* Unstable RCU readers are allowed to read this. */ + refcount_t vm_refcnt; +#endif } __randomize_layout; struct vm_fault {}; @@ -463,23 +456,41 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi) return mas_find(&vmi->mas, ULONG_MAX); } -static inline void vma_lock_init(struct vm_area_struct *vma) +/* + * WARNING: to avoid racing with vma_mark_attached()/vma_mark_detached(), these + * assertions should be made either under mmap_write_lock or when the object + * has been isolated under mmap_write_lock, ensuring no competing writers. + */ +static inline void vma_assert_attached(struct vm_area_struct *vma) { - init_rwsem(&vma->vm_lock.lock); - vma->vm_lock_seq = UINT_MAX; + VM_BUG_ON_VMA(!refcount_read(&vma->vm_refcnt), vma); } -static inline void vma_mark_attached(struct vm_area_struct *vma) +static inline void vma_assert_detached(struct vm_area_struct *vma) { - vma->detached = false; + VM_BUG_ON_VMA(refcount_read(&vma->vm_refcnt), vma); } static inline void vma_assert_write_locked(struct vm_area_struct *); +static inline void vma_mark_attached(struct vm_area_struct *vma) +{ + vma_assert_write_locked(vma); + vma_assert_detached(vma); + refcount_set(&vma->vm_refcnt, 1); +} + static inline void vma_mark_detached(struct vm_area_struct *vma) { - /* When detaching vma should be write-locked */ vma_assert_write_locked(vma); - vma->detached = true; + vma_assert_attached(vma); + + /* We are the only writer, so no need to use vma_refcount_put(). */ + if (unlikely(!refcount_dec_and_test(&vma->vm_refcnt))) { + /* + * Reader must have temporarily raised vm_refcnt but it will + * drop it without using the vma since vma is write-locked. 
+ */ + } } extern const struct vm_operations_struct vma_dummy_vm_ops; @@ -492,9 +503,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm) vma->vm_mm = mm; vma->vm_ops = &vma_dummy_vm_ops; INIT_LIST_HEAD(&vma->anon_vma_chain); - /* vma is not locked, can't use vma_mark_detached() */ - vma->detached = true; - vma_lock_init(vma); + vma->vm_lock_seq = UINT_MAX; } static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm) @@ -517,10 +526,9 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) return NULL; memcpy(new, orig, sizeof(*new)); - vma_lock_init(new); + refcount_set(&new->vm_refcnt, 0); + new->vm_lock_seq = UINT_MAX; INIT_LIST_HEAD(&new->anon_vma_chain); - /* vma is not locked, can't use vma_mark_detached() */ - new->detached = true; return new; }

From patchwork Sat Jan 11 04:25:59 2025
Date: Fri, 10 Jan 2025 20:25:59 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
Mime-Version: 1.0
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-13-surenb@google.com>
Subject: [PATCH v9 12/17] mm: move lesser used vma_area_struct members into the last cacheline
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com
Move several vma_area_struct members which are rarely or never used during page fault handling into the last cacheline to better pack
vm_area_struct. As a result vm_area_struct will fit into 3 as opposed to 4 cachelines. New typical vm_area_struct layout:

struct vm_area_struct {
	union {
		struct {
			long unsigned int vm_start;              /*     0     8 */
			long unsigned int vm_end;                /*     8     8 */
		};                                               /*     0    16 */
		freeptr_t          vm_freeptr;                   /*     0     8 */
	};                                                       /*     0    16 */
	struct mm_struct *         vm_mm;                        /*    16     8 */
	pgprot_t                   vm_page_prot;                 /*    24     8 */
	union {
		const vm_flags_t   vm_flags;                     /*    32     8 */
		vm_flags_t         __vm_flags;                   /*    32     8 */
	};                                                       /*    32     8 */
	unsigned int               vm_lock_seq;                  /*    40     4 */

	/* XXX 4 bytes hole, try to pack */

	struct list_head           anon_vma_chain;               /*    48    16 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	struct anon_vma *          anon_vma;                     /*    64     8 */
	const struct vm_operations_struct  * vm_ops;             /*    72     8 */
	long unsigned int          vm_pgoff;                     /*    80     8 */
	struct file *              vm_file;                      /*    88     8 */
	void *                     vm_private_data;              /*    96     8 */
	atomic_long_t              swap_readahead_info;          /*   104     8 */
	struct mempolicy *         vm_policy;                    /*   112     8 */
	struct vma_numab_state *   numab_state;                  /*   120     8 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	refcount_t                 vm_refcnt (__aligned__(64));  /*   128     4 */

	/* XXX 4 bytes hole, try to pack */

	struct {
		struct rb_node     rb (__aligned__(8));          /*   136    24 */
		long unsigned int  rb_subtree_last;              /*   160     8 */
	} __attribute__((__aligned__(8))) shared;                /*   136    32 */
	struct anon_vma_name *     anon_name;                    /*   168     8 */
	struct vm_userfaultfd_ctx  vm_userfaultfd_ctx;           /*   176     8 */

	/* size: 192, cachelines: 3, members: 18 */
	/* sum members: 176, holes: 2, sum holes: 8 */
	/* padding: 8 */
	/* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
} __attribute__((__aligned__(64)));

Memory consumption per 1000 VMAs becomes 48 pages:

slabinfo after vm_area_struct changes:
 ... : ...
 vm_area_struct ... 192 42 2 : ...
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm_types.h | 38 ++++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 9228d19662c6..d902e6730654 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -725,17 +725,6 @@ struct vm_area_struct {
 	 */
 	unsigned int vm_lock_seq;
 #endif
-
-	/*
-	 * For areas with an address space and backing store,
-	 * linkage into the address_space->i_mmap interval tree.
-	 *
-	 */
-	struct {
-		struct rb_node rb;
-		unsigned long rb_subtree_last;
-	} shared;
-
 	/*
 	 * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
 	 * list, after a COW of one of the file pages.  A MAP_SHARED vma
@@ -755,14 +744,6 @@ struct vm_area_struct {
 	struct file * vm_file;		/* File we map to (can be NULL). */
 	void * vm_private_data;		/* was vm_pte (shared mem) */
-#ifdef CONFIG_ANON_VMA_NAME
-	/*
-	 * For private and shared anonymous mappings, a pointer to a null
-	 * terminated string containing the name given to the vma, or NULL if
-	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
-	 */
-	struct anon_vma_name *anon_name;
-#endif
 #ifdef CONFIG_SWAP
 	atomic_long_t		swap_readahead_info;
 #endif
@@ -775,7 +756,6 @@ struct vm_area_struct {
 #ifdef CONFIG_NUMA_BALANCING
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
-	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_PER_VMA_LOCK
 	/* Unstable RCU readers are allowed to read this. */
 	refcount_t vm_refcnt ____cacheline_aligned_in_smp;
@@ -783,6 +763,24 @@ struct vm_area_struct {
 	struct lockdep_map vmlock_dep_map;
 #endif
 #endif
+	/*
+	 * For areas with an address space and backing store,
+	 * linkage into the address_space->i_mmap interval tree.
+	 *
+	 */
+	struct {
+		struct rb_node rb;
+		unsigned long rb_subtree_last;
+	} shared;
+#ifdef CONFIG_ANON_VMA_NAME
+	/*
+	 * For private and shared anonymous mappings, a pointer to a null
+	 * terminated string containing the name given to the vma, or NULL if
+	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
+	 */
+	struct anon_vma_name *anon_name;
+#endif
+	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA

From patchwork Sat Jan 11 04:26:00 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935779
Date: Fri, 10 Jan 2025 20:26:00 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-14-surenb@google.com>
Subject: [PATCH v9 13/17] mm/debug: print vm_refcnt state when dumping the vma
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
vm_refcnt encodes a number of useful states:
 - whether the vma is attached or detached
 - the number of current vma readers
 - presence of a vma writer

Let's include it in the vma dump.

Signed-off-by: Suren Baghdasaryan
Acked-by: Vlastimil Babka
---
 mm/debug.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/debug.c b/mm/debug.c
index 8d2acf432385..325d7bf22038 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -178,6 +178,17 @@ EXPORT_SYMBOL(dump_page);
 
 void dump_vma(const struct vm_area_struct *vma)
 {
+#ifdef CONFIG_PER_VMA_LOCK
+	pr_emerg("vma %px start %px end %px mm %px\n"
+		"prot %lx anon_vma %px vm_ops %px\n"
+		"pgoff %lx file %px private_data %px\n"
+		"flags: %#lx(%pGv) refcnt %x\n",
+		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
+		(unsigned long)pgprot_val(vma->vm_page_prot),
+		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
+		vma->vm_file, vma->vm_private_data,
+		vma->vm_flags, &vma->vm_flags, refcount_read(&vma->vm_refcnt));
+#else
 	pr_emerg("vma %px start %px end %px mm %px\n"
 		"prot %lx anon_vma %px vm_ops %px\n"
 		"pgoff %lx file %px private_data %px\n"
@@ -187,6 +198,7 @@ void dump_vma(const struct vm_area_struct *vma)
 		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
 		vma->vm_file, vma->vm_private_data,
 		vma->vm_flags, &vma->vm_flags);
+#endif
 }
 EXPORT_SYMBOL(dump_vma);

From patchwork Sat Jan 11 04:26:01 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935780
Date: Fri, 10 Jan 2025 20:26:01 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-15-surenb@google.com>
Subject: [PATCH v9 14/17] mm: remove extra vma_numab_state_init() call
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
vma_init() already memsets the whole vm_area_struct to 0, so there is no need
for an additional vma_numab_state_init() call.
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a99b11ee1f66..c8da64b114d1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -948,7 +948,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
-	vma_numab_state_init(vma);
 	vma_lock_init(vma, false);
 }

From patchwork Sat Jan 11 04:26:02 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935781
Date: Fri, 10 Jan 2025 20:26:02 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-16-surenb@google.com>
Subject: [PATCH v9 15/17] mm: prepare lock_vma_under_rcu() for vma reuse possibility
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
    lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
    mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
    oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
    dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    lokeshgidra@google.com, minchan@google.com, jannh@google.com,
    shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
    klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@android.com, surenb@google.com
Once we make vma cache SLAB_TYPESAFE_BY_RCU, it will be possible for a vma
to be reused and attached to another mm after lock_vma_under_rcu() locks the
vma. lock_vma_under_rcu() should ensure that vma_start_read() is using the
original mm and after locking the vma it should ensure that vma->vm_mm has
not changed from under us.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 10 ++++++----
 mm/memory.c        |  7 ++++---
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c8da64b114d1..cb29eb7360c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -739,8 +739,10 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
  * using mmap_lock. The function should never yield false unlocked result.
+ * False locked result is possible if mm_lock_seq overflows or if vma gets
+ * reused and attached to a different mm before we lock it.
  */
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	int oldcnt;
 
@@ -751,7 +753,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;
 
 	/*
@@ -774,7 +776,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
 		vma_refcount_put(vma);
 		return false;
 	}
@@ -906,7 +908,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
diff --git a/mm/memory.c b/mm/memory.c
index dc16b67beefa..67cfcebb0f94 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6426,7 +6426,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma)
 		goto inval;
 
-	if (!vma_start_read(vma))
+	if (!vma_start_read(mm, vma))
 		goto inval;
 
 	/*
@@ -6436,8 +6436,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * fields are accessible for RCU readers.
 	 */
 
-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+	/* Check if the vma we locked is the right one. */
+	if (unlikely(vma->vm_mm != mm ||
+		     address < vma->vm_start || address >= vma->vm_end))
 		goto inval_end_read;
 
 	rcu_read_unlock();

From patchwork Sat Jan 11 04:26:03 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13935782
Date: Fri, 10 Jan 2025 20:26:03 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-17-surenb@google.com>
Subject: [PATCH v9 16/17] mm: make vma cache SLAB_TYPESAFE_BY_RCU
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com,
mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure that object reuse before the RCU grace period is over will be detected by lock_vma_under_rcu(). The current checks are sufficient as long as the vma is detached before it is freed. The only place this is not currently happening is in exit_mmap(). Add the missing vma_mark_detached() in exit_mmap().

Another issue which might trick lock_vma_under_rcu() during vma reuse is vm_area_dup(), which copies the entire content of the vma into a new one, overriding the new vma's vm_refcnt and temporarily making it appear attached. This might trick a racing lock_vma_under_rcu() into operating on a reused vma if it found the vma before it got reused. To prevent this situation, we should ensure that vm_refcnt stays at the detached state (0) when it is copied and advances to the attached state only after the vma is added into the vma tree. Introduce vm_area_init_from(), which preserves the new vma's vm_refcnt, and use it in vm_area_dup().
Since all vmas are in detached state with no current readers when they are freed, lock_vma_under_rcu() will not be able to take vm_refcnt after vma got detached even if vma is reused. Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate vm_area_struct reuse and will minimize the number of call_rcu() calls. Signed-off-by: Suren Baghdasaryan Reviewed-by: Vlastimil Babka --- include/linux/mm.h | 2 - include/linux/mm_types.h | 13 ++++-- include/linux/slab.h | 6 --- kernel/fork.c | 73 ++++++++++++++++++++------------ mm/mmap.c | 3 +- mm/vma.c | 11 ++--- mm/vma.h | 2 +- tools/testing/vma/vma_internal.h | 7 +-- 8 files changed, 63 insertions(+), 54 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index cb29eb7360c5..ac78425e9838 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code, struct vm_area_struct *vm_area_alloc(struct mm_struct *); struct vm_area_struct *vm_area_dup(struct vm_area_struct *); void vm_area_free(struct vm_area_struct *); -/* Use only if VMA has no other users */ -void __vm_area_free(struct vm_area_struct *vma); #ifndef CONFIG_MMU extern struct rb_root nommu_region_tree; diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index d902e6730654..d366ec6302e6 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -574,6 +574,12 @@ static inline void *folio_get_private(struct folio *folio) typedef unsigned long vm_flags_t; +/* + * freeptr_t represents a SLUB freelist pointer, which might be encoded + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled. + */ +typedef struct { unsigned long v; } freeptr_t; + /* * A region containing a mapping of a non-memory backed file under NOMMU * conditions. 
These are held in a global tree and are pinned by the VMAs that @@ -677,6 +683,9 @@ struct vma_numab_state { * * Only explicitly marked struct members may be accessed by RCU readers before * getting a stable reference. + * + * WARNING: when adding new members, please update vm_area_init_from() to copy + * them during vm_area_struct content duplication. */ struct vm_area_struct { /* The first cache line has the info for VMA tree walking. */ @@ -687,9 +696,7 @@ struct vm_area_struct { unsigned long vm_start; unsigned long vm_end; }; -#ifdef CONFIG_PER_VMA_LOCK - struct rcu_head vm_rcu; /* Used for deferred freeing. */ -#endif + freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */ }; /* diff --git a/include/linux/slab.h b/include/linux/slab.h index 10a971c2bde3..681b685b6c4e 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -234,12 +234,6 @@ enum _slab_flag_bits { #define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED #endif -/* - * freeptr_t represents a SLUB freelist pointer, which might be encoded - * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled. - */ -typedef struct { unsigned long v; } freeptr_t; - /* * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests. 
* diff --git a/kernel/fork.c b/kernel/fork.c index 9d9275783cf8..151b40627c14 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -449,6 +449,42 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm) return vma; } +static void vm_area_init_from(const struct vm_area_struct *src, + struct vm_area_struct *dest) +{ + dest->vm_mm = src->vm_mm; + dest->vm_ops = src->vm_ops; + dest->vm_start = src->vm_start; + dest->vm_end = src->vm_end; + dest->anon_vma = src->anon_vma; + dest->vm_pgoff = src->vm_pgoff; + dest->vm_file = src->vm_file; + dest->vm_private_data = src->vm_private_data; + vm_flags_init(dest, src->vm_flags); + memcpy(&dest->vm_page_prot, &src->vm_page_prot, + sizeof(dest->vm_page_prot)); + /* + * src->shared.rb may be modified concurrently when called from + * dup_mmap(), but the clone will reinitialize it. + */ + data_race(memcpy(&dest->shared, &src->shared, sizeof(dest->shared))); + memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx, + sizeof(dest->vm_userfaultfd_ctx)); +#ifdef CONFIG_ANON_VMA_NAME + dest->anon_name = src->anon_name; +#endif +#ifdef CONFIG_SWAP + memcpy(&dest->swap_readahead_info, &src->swap_readahead_info, + sizeof(dest->swap_readahead_info)); +#endif +#ifndef CONFIG_MMU + dest->vm_region = src->vm_region; +#endif +#ifdef CONFIG_NUMA + dest->vm_policy = src->vm_policy; +#endif +} + struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) { struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL); @@ -458,11 +494,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) ASSERT_EXCLUSIVE_WRITER(orig->vm_flags); ASSERT_EXCLUSIVE_WRITER(orig->vm_file); - /* - * orig->shared.rb may be modified concurrently, but the clone - * will be reinitialized. 
- */ - data_race(memcpy(new, orig, sizeof(*new))); + vm_area_init_from(orig, new); vma_lock_init(new, true); INIT_LIST_HEAD(&new->anon_vma_chain); vma_numab_state_init(new); @@ -471,7 +503,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig) return new; } -void __vm_area_free(struct vm_area_struct *vma) +void vm_area_free(struct vm_area_struct *vma) { /* The vma should be detached while being destroyed. */ vma_assert_detached(vma); @@ -480,25 +512,6 @@ void __vm_area_free(struct vm_area_struct *vma) kmem_cache_free(vm_area_cachep, vma); } -#ifdef CONFIG_PER_VMA_LOCK -static void vm_area_free_rcu_cb(struct rcu_head *head) -{ - struct vm_area_struct *vma = container_of(head, struct vm_area_struct, - vm_rcu); - - __vm_area_free(vma); -} -#endif - -void vm_area_free(struct vm_area_struct *vma) -{ -#ifdef CONFIG_PER_VMA_LOCK - call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb); -#else - __vm_area_free(vma); -#endif -} - static void account_kernel_stack(struct task_struct *tsk, int account) { if (IS_ENABLED(CONFIG_VMAP_STACK)) { @@ -3144,6 +3157,11 @@ void __init mm_cache_init(void) void __init proc_caches_init(void) { + struct kmem_cache_args args = { + .use_freeptr_offset = true, + .freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr), + }; + sighand_cachep = kmem_cache_create("sighand_cache", sizeof(struct sighand_struct), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU| @@ -3160,8 +3178,9 @@ void __init proc_caches_init(void) sizeof(struct fs_struct), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL); - vm_area_cachep = KMEM_CACHE(vm_area_struct, - SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC| + vm_area_cachep = kmem_cache_create("vm_area_struct", + sizeof(struct vm_area_struct), &args, + SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU| SLAB_ACCOUNT); mmap_init(); nsproxy_cache_init(); diff --git a/mm/mmap.c b/mm/mmap.c index cda01071c7b1..7aa36216ecc0 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1305,7 +1305,8 @@ void exit_mmap(struct 
mm_struct *mm) do { if (vma->vm_flags & VM_ACCOUNT) nr_accounted += vma_pages(vma); - remove_vma(vma, /* unreachable = */ true); + vma_mark_detached(vma); + remove_vma(vma); count++; cond_resched(); vma = vma_next(&vmi); diff --git a/mm/vma.c b/mm/vma.c index 93ff42ac2002..0a5158d611e3 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -406,19 +406,14 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg, /* * Close a vm structure and free it. */ -void remove_vma(struct vm_area_struct *vma, bool unreachable) +void remove_vma(struct vm_area_struct *vma) { might_sleep(); vma_close(vma); if (vma->vm_file) fput(vma->vm_file); mpol_put(vma_policy(vma)); - if (unreachable) { - vma_mark_detached(vma); - __vm_area_free(vma); - } else { - vm_area_free(vma); - } + vm_area_free(vma); } /* @@ -1201,7 +1196,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms, /* Remove and clean up vmas */ mas_set(mas_detach, 0); mas_for_each(mas_detach, vma, ULONG_MAX) - remove_vma(vma, /* unreachable = */ false); + remove_vma(vma); vm_unacct_memory(vms->nr_accounted); validate_mm(mm); diff --git a/mm/vma.h b/mm/vma.h index 63dd38d5230c..f51005b95b39 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -170,7 +170,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm, unsigned long start, size_t len, struct list_head *uf, bool unlock); -void remove_vma(struct vm_area_struct *vma, bool unreachable); +void remove_vma(struct vm_area_struct *vma); void unmap_region(struct ma_state *mas, struct vm_area_struct *vma, struct vm_area_struct *prev, struct vm_area_struct *next); diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h index 2ce032943861..49a85ce0d45a 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -697,14 +697,9 @@ static inline void mpol_put(struct mempolicy *) { } -static inline void __vm_area_free(struct vm_area_struct *vma) -{ - free(vma); -} - static inline void vm_area_free(struct vm_area_struct 
*vma) { - __vm_area_free(vma); + free(vma); } static inline void lru_add_drain(void)

From patchwork Sat Jan 11 04:26:04 2025
Date: Fri, 10 Jan 2025 20:26:04 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
References: <20250111042604.3230628-1-surenb@google.com>
Message-ID: <20250111042604.3230628-18-surenb@google.com>
Subject: [PATCH v9 17/17] docs/mm: document latest changes to vm_lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com, "Liam R. Howlett"

Change the documentation to reflect that vm_lock is integrated into vma and replaced with vm_refcnt. Document newly introduced vma_start_read_locked{_nested} functions.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
---
Documentation/mm/process_addrs.rst | 44 ++++++++++++++++++------------
1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst index 81417fa2ed20..f573de936b5d 100644 --- a/Documentation/mm/process_addrs.rst +++ b/Documentation/mm/process_addrs.rst @@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`, before releasing the RCU lock via :c:func:`!rcu_read_unlock`. -VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for -their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it -via :c:func:`!vma_end_read`.
+In cases when the user already holds mmap read lock, :c:func:`!vma_start_read_locked` +and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not +fail due to lock contention but the caller should still check their return values +in case they fail for other reasons. + +VMA read locks increment :c:member:`!vma.vm_refcnt` reference counter for their +duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via +:c:func:`!vma_end_read`. VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always @@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write lock, releasing or downgrading the mmap write lock also releases the VMA write lock so there is no :c:func:`!vma_end_write` function. -Note that a semaphore write lock is not held across a VMA lock. Rather, a -sequence number is used for serialisation, and the write semaphore is only -acquired at the point of write lock to update this. +Note that when write-locking a VMA lock, the :c:member:`!vma.vm_refcnt` is temporarily +modified so that readers can detect the presence of a writer. The reference counter is +restored once the vma sequence number used for serialisation is updated. This ensures the semantics we require - VMA write locks provide exclusive write access to the VMA. @@ -738,7 +743,7 @@ Implementation details The VMA lock mechanism is designed to be a lightweight means of avoiding the use of the heavily contended mmap lock. It is implemented using a combination of a -read/write semaphore and sequence numbers belonging to the containing +reference counter and sequence numbers belonging to the containing :c:struct:`!struct mm_struct` and the VMA.
Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic @@ -779,28 +784,31 @@ release of any VMA locks on its release makes sense, as you would never want to keep VMAs locked across entirely separate write operations. It also maintains correct lock ordering. -Each time a VMA read lock is acquired, we acquire a read lock on the -:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that -the sequence count of the VMA does not match that of the mm. +Each time a VMA read lock is acquired, we increment :c:member:`!vma.vm_refcnt` +reference counter and check that the sequence count of the VMA does not match +that of the mm. -If it does, the read lock fails. If it does not, we hold the lock, excluding -writers, but permitting other readers, who will also obtain this lock under RCU. +If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped. +If it does not, we keep the reference counter raised, excluding writers, but +permitting other readers, who can also obtain this lock under RCU. Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu` are also RCU safe, so the whole read lock operation is guaranteed to function correctly. -On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock` -read/write semaphore, before setting the VMA's sequence number under this lock, -also simultaneously holding the mmap write lock. +On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be +modified by readers and wait for all readers to drop their reference count. +Once there are no readers, VMA's sequence number is set to match that of the +mm. During this entire operation mmap write lock is held. This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep until these are finished and mutual exclusion is achieved. -After setting the VMA's sequence number, the lock is released, avoiding -complexity with a long-term held write lock. 
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt` +indicating a writer is cleared. From this point on, VMA's sequence number will +indicate VMA's write-locked state until mmap write lock is dropped or downgraded. -This clever combination of a read/write semaphore and sequence count allows for +This clever combination of a reference counter and sequence count allows for fast RCU-based per-VMA lock acquisition (especially on page fault, though utilised elsewhere) with minimal complexity around lock ordering.