From patchwork Thu Dec 26 17:06:54 2024
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13921383
Date: Thu, 26 Dec 2024 09:06:54 -0800
Message-ID: <20241226170710.1159679-3-surenb@google.com>
In-Reply-To: <20241226170710.1159679-1-surenb@google.com>
References: <20241226170710.1159679-1-surenb@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Subject: [PATCH v7 02/17] mm: move per-vma lock into vm_area_struct
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
 hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
 surenb@google.com

Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of the performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].

Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefits, there is no reason
for this split. Merging the vm_lock back into vm_area_struct also
allows vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this
patchset.

Move vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned as well. With
a kernel compiled using defconfig, this causes VMA memory consumption
to grow from 160 (vm_area_struct) + 40 (vm_lock) bytes to 256 bytes:

    slabinfo before:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vma_lock         ...     40  102    1 : ...
     vm_area_struct   ...    160   51    2 : ...

    slabinfo after moving vm_lock:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vm_area_struct   ...    256   32    2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 50 to 64
pages, which is 5.5MB per 100000 VMAs. Note that the size of this
structure depends on the kernel configuration and the original size is
typically higher than 160 bytes, so these calculations are close to the
worst-case scenario.
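
The aggregate figures can be sanity-checked from the objperslab and
pagesperslab columns above. A back-of-the-envelope check, illustrative
only and not part of the patch (assumes 4KB pages and that whole slabs
are consumed):

/* Pages used by N objects when each slab holds 'objperslab' objects
 * and spans 'pagesperslab' pages, rounded up to whole slabs. */
#include <stdio.h>

static unsigned int pages(unsigned int objs, unsigned int objperslab,
                          unsigned int pagesperslab)
{
        return (objs + objperslab - 1) / objperslab * pagesperslab;
}

int main(void)
{
        unsigned int before = pages(1000, 51, 2) + pages(1000, 102, 1);
        unsigned int after = pages(1000, 32, 2);

        printf("per 1000 VMAs: %u -> %u pages\n", before, after);  /* 50 -> 64 */
        printf("per 100000 VMAs: ~%.1fMB extra\n",
               (after - before) * 4096.0 * 100 / (1 << 20));       /* ~5.5MB */
        return 0;
}
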
A more realistic vm_area_struct usage before this change is:

     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vma_lock         ...     40  102    1 : ...
     vm_area_struct   ...    176   46    2 : ...

Aggregate VMA memory consumption per 1000 VMAs grows from 54 to 64
pages, which is 3.9MB per 100000 VMAs. This memory consumption growth
can be addressed later by optimizing the vm_lock.

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/mm.h               | 28 ++++++++++--------
 include/linux/mm_types.h         |  6 ++--
 kernel/fork.c                    | 49 ++++----------------------
 tools/testing/vma/vma_internal.h | 33 +++++----------------
 4 files changed, 32 insertions(+), 84 deletions(-)
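
For readers skimming the diff, the layout change can be illustrated
with a minimal standalone userspace sketch (illustration only, not part
of the patch: pthread_rwlock_t stands in for the kernel's struct
rw_semaphore, aligned(64) for ____cacheline_aligned_in_smp, and the
struct names are invented for the example):

#include <pthread.h>
#include <stdio.h>

struct vma_lock {
        pthread_rwlock_t lock;  /* stand-in for struct rw_semaphore */
};

/* Before: the lock lives in a separate allocation, reached through a
 * pointer, so every lock/unlock pays an extra dereference. */
struct vma_before {
        unsigned long vm_start, vm_end;
        unsigned int vm_lock_seq;
        struct vma_lock *vm_lock;
};

/* After: the lock is embedded and starts on its own cacheline, so
 * rwsem traffic does not false-share with the neighbouring fields. */
struct vma_after {
        unsigned long vm_start, vm_end;
        unsigned int vm_lock_seq;
        struct vma_lock vm_lock __attribute__((aligned(64)));
};

int main(void)
{
        printf("before: %zu-byte vma + separate %zu-byte lock\n",
               sizeof(struct vma_before), sizeof(struct vma_lock));
        printf("after:  %zu-byte vma, lock embedded\n",
               sizeof(struct vma_after));
        return 0;
}

The real change is in the mm_types.h hunk below: vm_lock becomes an
embedded ____cacheline_aligned_in_smp member instead of a struct
vma_lock pointer, and proc_caches_init() gains
SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE so the slab preserves that alignment,
presumably also keeping the cache unmerged ahead of the
SLAB_TYPESAFE_BY_RCU conversion mentioned above.
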
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a48e207d25f2..f3f92ba8f5fe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,6 +697,12 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
+static inline void vma_lock_init(struct vm_area_struct *vma)
+{
+	init_rwsem(&vma->vm_lock.lock);
+	vma->vm_lock_seq = UINT_MAX;
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -714,7 +720,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
 		return false;
 
 	/*
@@ -729,7 +735,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
 	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock->lock);
+		up_read(&vma->vm_lock.lock);
 		return false;
 	}
 	return true;
@@ -744,7 +750,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock->lock, subclass);
+	down_read_nested(&vma->vm_lock.lock, subclass);
 }
 
 /*
@@ -756,13 +762,13 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock->lock);
+	down_read(&vma->vm_lock.lock);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock->lock);
+	up_read(&vma->vm_lock.lock);
 	rcu_read_unlock();
 }
 
@@ -791,7 +797,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock->lock);
+	down_write(&vma->vm_lock.lock);
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -799,7 +805,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock->lock);
+	up_write(&vma->vm_lock.lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -811,7 +817,7 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock->lock))
+	if (!rwsem_is_locked(&vma->vm_lock.lock))
 		vma_assert_write_locked(vma);
 }
 
@@ -844,6 +850,7 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 
 #else /* CONFIG_PER_VMA_LOCK */
 
+static inline void vma_lock_init(struct vm_area_struct *vma) {}
 static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
@@ -878,10 +885,6 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-/*
- * WARNING: vma_init does not initialize vma->vm_lock.
- * Use vm_area_alloc()/vm_area_free() if vma needs locking.
- */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
@@ -890,6 +893,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
 	vma_numab_state_init(vma);
+	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5f1b2dc788e2..6573d95f1d1e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -730,8 +730,6 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	/* Unstable RCU readers are allowed to read this. */
-	struct vma_lock *vm_lock;
 #endif
 
 	/*
@@ -784,6 +782,10 @@ struct vm_area_struct {
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Unstable RCU readers are allowed to read this. */
+	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
+#endif
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/kernel/fork.c b/kernel/fork.c
index ded49f18cd95..40a8e615499f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,35 +436,6 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-#ifdef CONFIG_PER_VMA_LOCK
-
-/* SLAB cache for vm_area_struct.lock */
-static struct kmem_cache *vma_lock_cachep;
-
-static bool vma_lock_alloc(struct vm_area_struct *vma)
-{
-	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
-}
-
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
-}
-
-#else /* CONFIG_PER_VMA_LOCK */
-
-static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
-static inline void vma_lock_free(struct vm_area_struct *vma) {}
-
-#endif /* CONFIG_PER_VMA_LOCK */
-
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -474,10 +445,6 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		kmem_cache_free(vm_area_cachep, vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -496,10 +463,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * will be reinitialized.
 	 */
 	data_race(memcpy(new, orig, sizeof(*new)));
-	if (!vma_lock_alloc(new)) {
-		kmem_cache_free(vm_area_cachep, new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
 	dup_anon_vma_name(orig, new);
@@ -511,7 +475,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 {
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
-	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -522,7 +485,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3188,11 +3151,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
-#ifdef CONFIG_PER_VMA_LOCK
-	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
-#endif
+	vm_area_cachep = KMEM_CACHE(vm_area_struct,
+			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index ae635eecbfa8..d19ce6fcab83 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -270,10 +270,10 @@ struct vm_area_struct {
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 *  - mmap_lock (in write mode)
-	 *  - vm_lock->lock (in write mode)
+	 *  - vm_lock.lock (in write mode)
 	 * Can be read reliably while holding one of:
 	 *  - mmap_lock (in read or write mode)
-	 *  - vm_lock->lock (in read or write mode)
+	 *  - vm_lock.lock (in read or write mode)
 	 * Can be read unreliably (using READ_ONCE()) for pessimistic bailout
 	 * while holding nothing (except RCU to keep the VMA struct allocated).
 	 *
@@ -282,7 +282,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	unsigned int vm_lock_seq;
-	struct vma_lock *vm_lock;
+	struct vma_lock vm_lock;
 #endif
 
 	/*
@@ -459,17 +459,10 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 	return mas_find(&vmi->mas, ULONG_MAX);
 }
 
-static inline bool vma_lock_alloc(struct vm_area_struct *vma)
+static inline void vma_lock_init(struct vm_area_struct *vma)
 {
-	vma->vm_lock = calloc(1, sizeof(struct vma_lock));
-
-	if (!vma->vm_lock)
-		return false;
-
-	init_rwsem(&vma->vm_lock->lock);
+	init_rwsem(&vma->vm_lock.lock);
 	vma->vm_lock_seq = UINT_MAX;
-
-	return true;
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *);
@@ -492,6 +485,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
+	vma_lock_init(vma);
 }
 
 static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
@@ -502,10 +496,6 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 		return NULL;
 
 	vma_init(vma, mm);
-	if (!vma_lock_alloc(vma)) {
-		free(vma);
-		return NULL;
-	}
 
 	return vma;
 }
@@ -518,10 +508,7 @@ static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		return NULL;
 
 	memcpy(new, orig, sizeof(*new));
-	if (!vma_lock_alloc(new)) {
-		free(new);
-		return NULL;
-	}
+	vma_lock_init(new);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 
 	return new;
@@ -691,14 +678,8 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vma_lock_free(struct vm_area_struct *vma)
-{
-	free(vma->vm_lock);
-}
-
 static inline void __vm_area_free(struct vm_area_struct *vma)
 {
-	vma_lock_free(vma);
 	free(vma);
 }