From patchwork Thu Oct 24 20:52:30 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13849703
Date: Thu, 24 Oct 2024 13:52:30 -0700
Mime-Version: 1.0
X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog
Message-ID: <20241024205231.1944747-1-surenb@google.com>
Subject: [PATCH 1/2] mm: convert mm_lock_seq to a proper seqcount
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com,
 Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
 mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org,
 david@redhat.com, willy@infradead.org, brauner@kernel.org, oleg@redhat.com,
 arnd@arndb.de, richard.weiyang@gmail.com, zhangpeng.00@bytedance.com,
 linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com
Convert mm_lock_seq to be seqcount_t and change all mmap_write_lock
variants to increment it, in line with the usual seqcount usage pattern.
This lets us check whether the mmap_lock is write-locked by checking the
mm_lock_seq.sequence counter (odd=locked, even=unlocked). This will be
used when implementing mmap_lock speculation functions.
As a result, vm_lock_seq is also changed to be unsigned to match the
type of mm_lock_seq.sequence.

Suggested-by: Peter Zijlstra
Signed-off-by: Suren Baghdasaryan
---
Applies over mm-unstable.

This conversion was discussed at [1] and these patches will likely be
incorporated into the next version of Andrii's patchset. The issue of
seqcount_t.sequence being an unsigned rather than unsigned long will be
addressed separately in collaboration with Jann Horn.

[1] https://lore.kernel.org/all/20241010205644.3831427-2-andrii@kernel.org/

 include/linux/mm.h               | 12 +++----
 include/linux/mm_types.h         |  7 ++--
 include/linux/mmap_lock.h        | 58 +++++++++++++++++++++-----------
 kernel/fork.c                    |  5 +--
 mm/init-mm.c                     |  2 +-
 tools/testing/vma/vma.c          |  4 +--
 tools/testing/vma/vma_internal.h |  4 +--
 7 files changed, 56 insertions(+), 36 deletions(-)

base-commit: 9c111059234a949a4d3442a413ade19cc65ab927

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4ef8cf1043f1..77644118b200 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -698,7 +698,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
	 * we don't rely on for anything - the mm_lock_seq read against which we
	 * need ordering is below.
	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
		return false;

	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
@@ -715,7 +715,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
	 * after it has been unlocked.
	 * This pairs with RELEASE semantics in vma_end_write_all().
	 */
-	if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
		up_read(&vma->vm_lock->lock);
		return false;
	}
@@ -730,7 +730,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
 }

 /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
-static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
+static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
 {
	mmap_assert_write_locked(vma->vm_mm);

@@ -738,7 +738,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
	 * mm->mm_lock_seq can't be concurrently modified.
	 */
-	*mm_lock_seq = vma->vm_mm->mm_lock_seq;
+	*mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;
	return (vma->vm_lock_seq == *mm_lock_seq);
 }

@@ -749,7 +749,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
  */
 static inline void vma_start_write(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
+	unsigned int mm_lock_seq;

	if (__is_vma_write_locked(vma, &mm_lock_seq))
		return;
@@ -767,7 +767,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)

 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
+	unsigned int mm_lock_seq;

	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ff8627acbaa7..80fef38d9d64 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -715,7 +715,7 @@ struct vm_area_struct {
	 * counter reuse can only lead to occasional unnecessary use of the
	 * slowpath.
	 */
-	int vm_lock_seq;
+	unsigned int vm_lock_seq;
	/* Unstable RCU readers are allowed to read this.
	 */
	struct vma_lock *vm_lock;
 #endif
@@ -887,6 +887,9 @@ struct mm_struct {
		 * Roughly speaking, incrementing the sequence number is
		 * equivalent to releasing locks on VMAs; reading the sequence
		 * number can be part of taking a read lock on a VMA.
+		 * Incremented every time mmap_lock is write-locked/unlocked.
+		 * Initialized to 0, therefore odd values indicate mmap_lock
+		 * is write-locked and even values that it's released.
		 *
		 * Can be modified under write mmap_lock using RELEASE
		 * semantics.
@@ -895,7 +898,7 @@ struct mm_struct {
		 * Can be read with ACQUIRE semantics if not holding write
		 * mmap_lock.
		 */
-		int mm_lock_seq;
+		seqcount_t mm_lock_seq;
 #endif
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index de9dc20b01ba..6b3272686860 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,39 +71,38 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }

 #ifdef CONFIG_PER_VMA_LOCK
-/*
- * Drop all currently-held per-VMA locks.
- * This is called from the mmap_lock implementation directly before releasing
- * a write-locked mmap_lock (or downgrading it to read-locked).
- * This should normally NOT be called manually from other places.
- * If you want to call this manually anyway, keep in mind that this will release
- * *all* VMA write locks, including ones from further up the stack.
- */
-static inline void vma_end_write_all(struct mm_struct *mm)
+static inline void mm_lock_seqcount_init(struct mm_struct *mm)
 {
-	mmap_assert_write_locked(mm);
-	/*
-	 * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
-	 * mmap_lock being held.
-	 * We need RELEASE semantics here to ensure that preceding stores into
-	 * the VMA take effect before we unlock it with this store.
-	 * Pairs with ACQUIRE semantics in vma_start_read().
-	 */
-	smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
+	seqcount_init(&mm->mm_lock_seq);
+}
+
+static inline void mm_lock_seqcount_begin(struct mm_struct *mm)
+{
+	do_raw_write_seqcount_begin(&mm->mm_lock_seq);
+}
+
+static inline void mm_lock_seqcount_end(struct mm_struct *mm)
+{
+	do_raw_write_seqcount_end(&mm->mm_lock_seq);
 }
+
 #else
-static inline void vma_end_write_all(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
 #endif

 static inline void mmap_init_lock(struct mm_struct *mm)
 {
	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
 }

 static inline void mmap_write_lock(struct mm_struct *mm)
 {
	__mmap_lock_trace_start_locking(mm, true);
	down_write(&mm->mmap_lock);
+	mm_lock_seqcount_begin(mm);
	__mmap_lock_trace_acquire_returned(mm, true, true);
 }

@@ -111,6 +110,7 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
 {
	__mmap_lock_trace_start_locking(mm, true);
	down_write_nested(&mm->mmap_lock, subclass);
+	mm_lock_seqcount_begin(mm);
	__mmap_lock_trace_acquire_returned(mm, true, true);
 }

@@ -120,10 +120,30 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)

	__mmap_lock_trace_start_locking(mm, true);
	ret = down_write_killable(&mm->mmap_lock);
+	if (!ret)
+		mm_lock_seqcount_begin(mm);
	__mmap_lock_trace_acquire_returned(mm, true, ret == 0);
	return ret;
 }

+/*
+ * Drop all currently-held per-VMA locks.
+ * This is called from the mmap_lock implementation directly before releasing
+ * a write-locked mmap_lock (or downgrading it to read-locked).
+ * This should normally NOT be called manually from other places.
+ * If you want to call this manually anyway, keep in mind that this will release
+ * *all* VMA write locks, including ones from further up the stack.
+ */
+static inline void vma_end_write_all(struct mm_struct *mm)
+{
+	mmap_assert_write_locked(mm);
+	/*
+	 * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
+	 * mmap_lock being held.
+	 */
+	mm_lock_seqcount_end(mm);
+}
+
 static inline void mmap_write_unlock(struct mm_struct *mm)
 {
	__mmap_lock_trace_released(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index fd528fb5e305..0cae6fc651f0 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -447,7 +447,7 @@ static bool vma_lock_alloc(struct vm_area_struct *vma)
		return false;

	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
	return true;
 }

@@ -1260,9 +1260,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
	seqcount_init(&mm->write_protect_seq);
	mmap_init_lock(mm);
	INIT_LIST_HEAD(&mm->mmlist);
-#ifdef CONFIG_PER_VMA_LOCK
-	mm->mm_lock_seq = 0;
-#endif
	mm_pgtables_bytes_init(mm);
	mm->map_count = 0;
	mm->locked_vm = 0;
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 24c809379274..6af3ad675930 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,7 +40,7 @@ struct mm_struct init_mm = {
	.arg_lock	= __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
-	.mm_lock_seq	= 0,
+	.mm_lock_seq	= SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
	.user_ns	= &init_user_ns,
	.cpu_bitmap	= CPU_BITS_NONE,
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index 8fab5e13c7c3..9bcf1736bf18 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -89,7 +89,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
	 * begun. Linking to the tree will have caused this to be incremented,
	 * which means we will get a false positive otherwise.
	 */
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;

	return vma;
 }
@@ -214,7 +214,7 @@ static bool vma_write_started(struct vm_area_struct *vma)
	int seq = vma->vm_lock_seq;

	/* We reset after each check.
	 */
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;

	/* The vma_start_write() stub simply increments this value. */
	return seq > -1;
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index e76ff579e1fd..1d9fc97b8e80 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -241,7 +241,7 @@ struct vm_area_struct {
	 * counter reuse can only lead to occasional unnecessary use of the
	 * slowpath.
	 */
-	int vm_lock_seq;
+	unsigned int vm_lock_seq;

	struct vma_lock *vm_lock;
 #endif
@@ -416,7 +416,7 @@ static inline bool vma_lock_alloc(struct vm_area_struct *vma)
		return false;

	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
	return true;
 }
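For readers less familiar with the seqcount convention this patch relies on, here is a stand-alone user-space sketch of the odd/even rule from the commit message: the writer bumps the counter once when taking the lock and once when releasing it, so an odd value means "write-locked" and an even value means "released". The names (mock_mm, mock_*) are purely illustrative and not part of the kernel API; the sketch deliberately omits the lockdep annotations and memory ordering that the real seqcount_t machinery (do_raw_write_seqcount_begin/end) provides.

```c
#include <stdbool.h>

/* Hypothetical user-space model of mm_lock_seq.sequence: a plain
 * unsigned counter standing in for the seqcount. */
struct mock_mm {
	unsigned int lock_seq;	/* stands in for mm_lock_seq.sequence */
};

void mock_write_lock(struct mock_mm *mm)
{
	mm->lock_seq++;		/* even -> odd: now write-locked */
}

void mock_write_unlock(struct mock_mm *mm)
{
	mm->lock_seq++;		/* odd -> even: released */
}

bool mock_is_write_locked(const struct mock_mm *mm)
{
	return mm->lock_seq & 1;	/* odd sequence => writer active */
}
```

This is the property the patch exploits: with the counter initialized to 0 and incremented on both lock and unlock, a single parity check distinguishes the two states, which is what the planned mmap_lock speculation functions need.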