From patchwork Mon Nov 11 20:55:05 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13871370
Date: Mon, 11 Nov 2024 12:55:05 -0800
Message-ID: <20241111205506.3404479-4-surenb@google.com>
In-Reply-To: <20241111205506.3404479-1-surenb@google.com>
References: <20241111205506.3404479-1-surenb@google.com>
Subject: [PATCH 3/4] mm: replace rw_semaphore with atomic_t in vma_lock
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
    mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com,
    oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com,
    peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org,
    brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
    minchan@google.com, jannh@google.com, shakeel.butt@linux.dev,
    souravpanda@google.com, pasha.tatashin@soleen.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com, surenb@google.com

rw_semaphore is a sizable structure of 40 bytes and consumes considerable
space in each vm_area_struct. However, vma_lock has two important
characteristics that allow us to replace rw_semaphore with a simpler
structure:

1. Readers never wait. They try to take the vma_lock and fall back to
mmap_lock if that fails.

2. Only one writer at a time will ever try to write-lock a vma_lock,
because writers first take mmap_lock in write mode.

Because of these requirements, full rw_semaphore functionality is not
needed and we can replace rw_semaphore with an atomic variable. When a
reader takes the read lock, it increments the atomic, unless the top two
bits are set, indicating a writer is present. When a writer takes the
write lock, it sets the VMA_LOCK_WR_LOCKED bit if there are no readers,
or the VMA_LOCK_WR_WAIT bit if readers are holding the lock, and puts
itself onto the newly introduced mm.vma_writer_wait waitqueue. Since all
writers take mmap_lock in write mode first, there can be only one writer
at a time. The last reader to release the lock will signal the writer to
wake up.

atomic_t might overflow if there are many competing readers, therefore
vma_start_read() implements an overflow check and, if an overflow occurs,
fails to take the lock. vma_start_read_locked{_nested} may cause an
overflow, but it is handled later by __vma_end_read().
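
To illustrate the count encoding, here is a minimal userspace model of
the reader fast path (an illustrative sketch only, not part of the patch:
it uses C11 stdatomic in place of the kernel's atomic_t, omits the
lockdep annotations and the mm_lock_seq recheck, and model_start_read is
a hypothetical name):

  #include <stdatomic.h>
  #include <stdbool.h>

  #define VMA_LOCK_WR_LOCKED	(1U << 31)
  #define VMA_LOCK_WR_WAIT	(1U << 30)

  /* Mirrors vma_start_read(): count++ unless a writer bit is set. */
  static bool model_start_read(atomic_uint *count)
  {
  	unsigned int cur = atomic_load(count);

  	for (;;) {
  		/* Readers never wait: fail if a writer holds or wants the lock. */
  		if (cur & (VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT))
  			return false;
  		/* Overflow check: fail rather than spill into the writer bits. */
  		if ((cur + 1) & (VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT))
  			return false;
  		/* On failure the CAS reloads cur and we retry. */
  		if (atomic_compare_exchange_weak(count, &cur, cur + 1))
  			return true;
  	}
  }

A failed trylock here simply sends the reader down the mmap_lock
fallback path, which is why no reader ever sleeps on this lock.
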
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h        | 142 ++++++++++++++++++++++++++++++++++----
 include/linux/mm_types.h  |  18 ++++-
 include/linux/mmap_lock.h |   3 +
 kernel/fork.c             |   2 +-
 mm/init-mm.c              |   2 +
 5 files changed, 151 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c1c2899464db..27c0e9ba81c4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -686,7 +686,41 @@ static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
 #ifdef CONFIG_PER_VMA_LOCK
 static inline void vma_lock_init(struct vma_lock *vm_lock)
 {
-	init_rwsem(&vm_lock->lock);
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	static struct lock_class_key lockdep_key;
+
+	lockdep_init_map(&vm_lock->dep_map, "vm_lock", &lockdep_key, 0);
+#endif
+	atomic_set(&vm_lock->count, VMA_LOCK_UNLOCKED);
+}
+
+static inline unsigned int vma_lock_reader_count(unsigned int counter)
+{
+	return counter & VMA_LOCK_RD_COUNT_MASK;
+}
+
+static inline void __vma_end_read(struct mm_struct *mm, struct vm_area_struct *vma)
+{
+	unsigned int count, prev, new;
+
+	count = (unsigned int)atomic_read(&vma->vm_lock.count);
+	for (;;) {
+		if (unlikely(vma_lock_reader_count(count) == 0)) {
+			/*
+			 * Overflow was possible in vma_start_read_locked().
+			 * When detected, wrap around preserving writer bits.
+			 */
+			new = count | ~(VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT);
+		} else
+			new = count - 1;
+		prev = atomic_cmpxchg(&vma->vm_lock.count, count, new);
+		if (prev == count)
+			break;
+		count = prev;
+	}
+	rwsem_release(&vma->vm_lock.dep_map, _RET_IP_);
+	if (vma_lock_reader_count(new) == 0 && (new & VMA_LOCK_WR_WAIT))
+		wake_up(&mm->vma_writer_wait);
 }
 
 /*
@@ -696,6 +730,9 @@ static inline void vma_lock_init(struct vma_lock *vm_lock)
  */
 static inline bool vma_start_read(struct vm_area_struct *vma)
 {
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned int count, prev, new;
+
 	/*
 	 * Check before locking. A race might cause false locked result.
 	 * We can use READ_ONCE() for the mm_lock_seq here, and don't need
@@ -703,11 +740,35 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
-		return false;
+	rwsem_acquire_read(&vma->vm_lock.dep_map, 0, 0, _RET_IP_);
+	count = (unsigned int)atomic_read(&vma->vm_lock.count);
+	for (;;) {
+		/* Is the VMA write-locked or is a writer waiting? */
+		if (count & (VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT)) {
+			rwsem_release(&vma->vm_lock.dep_map, _RET_IP_);
+			return false;
+		}
+
+		new = count + 1;
+		/* If atomic_t overflows, fail to lock. */
+		if (new & (VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT)) {
+			rwsem_release(&vma->vm_lock.dep_map, _RET_IP_);
+			return false;
+		}
+
+		/*
+		 * Atomic RMW will provide implicit mb on success to pair with
+		 * smp_wmb in vma_start_write, on failure we retry.
+		 */
+		prev = atomic_cmpxchg(&vma->vm_lock.count, count, new);
+		if (prev == count)
+			break;
+		count = prev;
+	}
+	lock_acquired(&vma->vm_lock.dep_map, _RET_IP_);
 
 	/*
 	 * Overflow might produce false locked result.
@@ -720,8 +781,8 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->vm_lock.lock);
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
+		__vma_end_read(mm, vma);
 		return false;
 	}
 	return true;
 }
@@ -733,8 +794,30 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
  */
 static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
 {
-	mmap_assert_locked(vma->vm_mm);
-	down_read_nested(&vma->vm_lock.lock, subclass);
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned int count, prev, new;
+
+	mmap_assert_locked(mm);
+
+	rwsem_acquire_read(&vma->vm_lock.dep_map, subclass, 0, _RET_IP_);
+	count = (unsigned int)atomic_read(&vma->vm_lock.count);
+	for (;;) {
+		/* We are holding mmap_lock, no active or waiting writers are possible. */
+		VM_BUG_ON_VMA(count & (VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT), vma);
+		new = count + 1;
+		/* Unlikely, but if atomic_t overflows, wrap around to 0. */
+		if (WARN_ON(new & (VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT)))
+			new = 0;
+		/*
+		 * Atomic RMW will provide implicit mb on success to pair with
+		 * smp_wmb in vma_start_write, on failure we retry.
+		 */
+		prev = atomic_cmpxchg(&vma->vm_lock.count, count, new);
+		if (prev == count)
+			break;
+		count = prev;
+	}
+	lock_acquired(&vma->vm_lock.dep_map, _RET_IP_);
 }
 
 /*
@@ -743,14 +826,15 @@ static inline void vma_start_read_locked_nested(struct vm_area_struct *vma, int
  */
 static inline void vma_start_read_locked(struct vm_area_struct *vma)
 {
-	mmap_assert_locked(vma->vm_mm);
-	down_read(&vma->vm_lock.lock);
+	vma_start_read_locked_nested(vma, 0);
 }
 
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
+	struct mm_struct *mm = vma->vm_mm;
+
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->vm_lock.lock);
+	__vma_end_read(mm, vma);
 	rcu_read_unlock();
 }
 
@@ -774,12 +858,34 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
  */
 static inline void vma_start_write(struct vm_area_struct *vma)
 {
+	unsigned int count, prev, new;
 	unsigned int mm_lock_seq;
 
+	might_sleep();
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
+	rwsem_acquire(&vma->vm_lock.dep_map, 0, 0, _RET_IP_);
+	count = (unsigned int)atomic_read(&vma->vm_lock.count);
+	for (;;) {
+		if (vma_lock_reader_count(count) > 0)
+			new = count | VMA_LOCK_WR_WAIT;
+		else
+			new = count | VMA_LOCK_WR_LOCKED;
+		prev = atomic_cmpxchg(&vma->vm_lock.count, count, new);
+		if (prev == count)
+			break;
+		count = prev;
+	}
+	if (new & VMA_LOCK_WR_WAIT) {
+		lock_contended(&vma->vm_lock.dep_map, _RET_IP_);
+		wait_event(vma->vm_mm->vma_writer_wait,
+			   atomic_cmpxchg(&vma->vm_lock.count, VMA_LOCK_WR_WAIT,
					  VMA_LOCK_WR_LOCKED) == VMA_LOCK_WR_WAIT);
+	}
+	lock_acquired(&vma->vm_lock.dep_map, _RET_IP_);
+
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
 	 * from the early lockless pessimistic check in vma_start_read().
@@ -787,7 +893,10 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	/* Write barrier to ensure vm_lock_seq change is visible before count */
+	smp_wmb();
+	rwsem_release(&vma->vm_lock.dep_map, _RET_IP_);
+	atomic_set(&vma->vm_lock.count, VMA_LOCK_UNLOCKED);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -797,9 +906,14 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
 
+static inline bool is_vma_read_locked(struct vm_area_struct *vma)
+{
+	return vma_lock_reader_count((unsigned int)atomic_read(&vma->vm_lock.count)) > 0;
+}
+
 static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	if (!rwsem_is_locked(&vma->vm_lock.lock))
+	if (!is_vma_read_locked(vma))
 		vma_assert_write_locked(vma);
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5c4bfdcfac72..789bccc05520 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -615,8 +615,23 @@ static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
 }
 #endif
 
+#define VMA_LOCK_UNLOCKED	0
+#define VMA_LOCK_WR_LOCKED	(1 << 31)
+#define VMA_LOCK_WR_WAIT	(1 << 30)
+
+#define VMA_LOCK_RD_COUNT_MASK	(VMA_LOCK_WR_WAIT - 1)
+
 struct vma_lock {
-	struct rw_semaphore lock;
+	/*
+	 * count & VMA_LOCK_RD_COUNT_MASK > 0	==> read-locked with 'count' readers
+	 * count & VMA_LOCK_WR_LOCKED != 0	==> write-locked
+	 * count & VMA_LOCK_WR_WAIT != 0	==> writer is waiting
+	 * count == 0				==> unlocked
+	 */
+	atomic_t count;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+#endif
 };
 
 struct vma_numab_state {
@@ -883,6 +898,7 @@ struct mm_struct {
 					  * by mmlist_lock
 					  */
 #ifdef CONFIG_PER_VMA_LOCK
+		struct wait_queue_head vma_writer_wait;
 		/*
 		 * This field has lock-like semantics, meaning it is sometimes
 		 * accessed with ACQUIRE/RELEASE semantics.
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 58dde2e35f7e..769ab97fff3e 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -121,6 +121,9 @@ static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
 	mm_lock_seqcount_init(mm);
+#ifdef CONFIG_PER_VMA_LOCK
+	init_waitqueue_head(&mm->vma_writer_wait);
+#endif
 }
 
 static inline void mmap_write_lock(struct mm_struct *mm)
diff --git a/kernel/fork.c b/kernel/fork.c
index 9e504105f24f..726050c557e2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -486,7 +486,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
+	VM_BUG_ON_VMA(is_vma_read_locked(vma), vma);
 	__vm_area_free(vma);
 }
 #endif
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 6af3ad675930..db058873ba18 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,6 +40,8 @@ struct mm_struct init_mm = {
 	.arg_lock	= __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
+	.vma_writer_wait =
+		__WAIT_QUEUE_HEAD_INITIALIZER(init_mm.vma_writer_wait),
 	.mm_lock_seq	= SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
 	.user_ns	= &init_user_ns,
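
For reference, the writer side in the same userspace model (again an
illustrative sketch, not part of the patch: the real vma_start_write()
sleeps on mm->vma_writer_wait instead of spinning, and it relies on the
same invariant that mmap_lock write mode admits at most one writer;
model_start_write is a hypothetical name):

  /*
   * Mirrors vma_start_write(): with mmap_lock held for write there is
   * at most one writer, so it either takes the lock outright or marks
   * itself waiting and lets the last reader hand the lock over.
   */
  static void model_start_write(atomic_uint *count)
  {
  	unsigned int cur = atomic_load(count);
  	unsigned int new;

  	for (;;) {
  		/* Readers present: announce a waiting writer instead. */
  		if (cur & ~(VMA_LOCK_WR_LOCKED | VMA_LOCK_WR_WAIT))
  			new = cur | VMA_LOCK_WR_WAIT;
  		else
  			new = cur | VMA_LOCK_WR_LOCKED;
  		if (atomic_compare_exchange_weak(count, &cur, new))
  			break;
  	}
  	if (new & VMA_LOCK_WR_WAIT) {
  		/* Spin here; the patch sleeps until the last reader wakes it. */
  		for (;;) {
  			unsigned int expect = VMA_LOCK_WR_WAIT;

  			if (atomic_compare_exchange_weak(count, &expect,
  							 VMA_LOCK_WR_LOCKED))
  				break;
  		}
  	}
  }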