From patchwork Sat Jan 11 04:26:03 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13935782
Date: Fri, 10 Jan 2025 20:26:03 -0800
In-Reply-To: <20250111042604.3230628-1-surenb@google.com>
Mime-Version: 1.0
References: <20250111042604.3230628-1-surenb@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20250111042604.3230628-17-surenb@google.com>
Subject: [PATCH v9 16/17] mm: make vma cache SLAB_TYPESAFE_BY_RCU
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
	lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
	mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
	oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
	dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
	lokeshgidra@google.com, minchan@google.com, jannh@google.com,
	shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
	klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
	linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@android.com, surenb@google.com
To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure that
object reuse before the RCU grace period is over will be detected by
lock_vma_under_rcu(). The current checks are sufficient as long as the
vma is detached before it is freed. The only place this does not
currently happen is in exit_mmap(). Add the missing vma_mark_detached()
in exit_mmap().

Another issue which might trick lock_vma_under_rcu() during vma reuse
is vm_area_dup(), which copies the entire content of the vma into a new
one, overwriting the new vma's vm_refcnt and temporarily making it
appear attached. This might trick a racing lock_vma_under_rcu() into
operating on a reused vma if it found the vma before it got reused. To
prevent this, ensure that vm_refcnt stays in the detached state (0)
while the vma content is copied, and advances to the attached state
only after the vma is added into the vma tree. Introduce
vm_area_init_from(), which preserves the new vma's vm_refcnt, and use
it in vm_area_dup(). Since all vmas are in the detached state with no
current readers when they are freed, lock_vma_under_rcu() will not be
able to take vm_refcnt after the vma got detached, even if the vma is
reused.

Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate
vm_area_struct reuse and will minimize the number of call_rcu() calls.
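For reference, the reader-side pattern this relies on looks roughly
like the sketch below. This is an illustration only, not the kernel's
actual lock_vma_under_rcu() implementation: obj_table_lookup() and the
obj type are hypothetical stand-ins.

#include <linux/rcupdate.h>
#include <linux/refcount.h>

/*
 * Illustrative sketch only: with SLAB_TYPESAFE_BY_RCU, an object may be
 * freed and reallocated while an RCU reader still holds a pointer to
 * it, so the reader must (a) take a reference in a way that fails on a
 * detached object and (b) revalidate its lookup after the reference is
 * taken.
 */
struct obj {
	unsigned long key;
	refcount_t refcnt;	/* 0 means detached */
};

static struct obj *lookup_under_rcu(unsigned long key)
{
	struct obj *o;

	rcu_read_lock();
	o = obj_table_lookup(key);	/* hypothetical lookup helper */
	/* Fails if the object is detached (refcnt == 0). */
	if (o && !refcount_inc_not_zero(&o->refcnt))
		o = NULL;
	/* The slab may have handed the memory to a new user: recheck. */
	if (o && o->key != key) {
		refcount_dec(&o->refcnt);
		o = NULL;
	}
	rcu_read_unlock();
	return o;
}

The key property is that both checks happen under rcu_read_lock():
SLAB_TYPESAFE_BY_RCU keeps the memory type-stable across reuse, so the
refcount and key remain safe to read even if the object was recycled.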
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/mm.h               |  2 -
 include/linux/mm_types.h         | 13 ++++--
 include/linux/slab.h             |  6 ---
 kernel/fork.c                    | 73 ++++++++++++++++++++------------
 mm/mmap.c                        |  3 +-
 mm/vma.c                         | 11 ++---
 mm/vma.h                         |  2 +-
 tools/testing/vma/vma_internal.h |  7 +--
 8 files changed, 63 insertions(+), 54 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cb29eb7360c5..ac78425e9838 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
 struct vm_area_struct *vm_area_alloc(struct mm_struct *);
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
 void vm_area_free(struct vm_area_struct *);
-/* Use only if VMA has no other users */
-void __vm_area_free(struct vm_area_struct *vma);
 
 #ifndef CONFIG_MMU
 extern struct rb_root nommu_region_tree;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d902e6730654..d366ec6302e6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -574,6 +574,12 @@ static inline void *folio_get_private(struct folio *folio)
 
 typedef unsigned long vm_flags_t;
 
+/*
+ * freeptr_t represents a SLUB freelist pointer, which might be encoded
+ * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
+ */
+typedef struct { unsigned long v; } freeptr_t;
+
 /*
  * A region containing a mapping of a non-memory backed file under NOMMU
  * conditions. These are held in a global tree and are pinned by the VMAs that
@@ -677,6 +683,9 @@ struct vma_numab_state {
  *
  * Only explicitly marked struct members may be accessed by RCU readers before
  * getting a stable reference.
+ *
+ * WARNING: when adding new members, please update vm_area_init_from() to copy
+ * them during vm_area_struct content duplication.
  */
 struct vm_area_struct {
 	/* The first cache line has the info for VMA tree walking. */
@@ -687,9 +696,7 @@ struct vm_area_struct {
 			unsigned long vm_start;
 			unsigned long vm_end;
 		};
-#ifdef CONFIG_PER_VMA_LOCK
-		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
-#endif
+		freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
 	};
 
 	/*
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..681b685b6c4e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -234,12 +234,6 @@ enum _slab_flag_bits {
 #define SLAB_NO_OBJ_EXT		__SLAB_FLAG_UNUSED
 #endif
 
-/*
- * freeptr_t represents a SLUB freelist pointer, which might be encoded
- * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
- */
-typedef struct { unsigned long v; } freeptr_t;
-
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
  *
diff --git a/kernel/fork.c b/kernel/fork.c
index 9d9275783cf8..151b40627c14 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -449,6 +449,42 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 	return vma;
 }
 
+static void vm_area_init_from(const struct vm_area_struct *src,
+			      struct vm_area_struct *dest)
+{
+	dest->vm_mm = src->vm_mm;
+	dest->vm_ops = src->vm_ops;
+	dest->vm_start = src->vm_start;
+	dest->vm_end = src->vm_end;
+	dest->anon_vma = src->anon_vma;
+	dest->vm_pgoff = src->vm_pgoff;
+	dest->vm_file = src->vm_file;
+	dest->vm_private_data = src->vm_private_data;
+	vm_flags_init(dest, src->vm_flags);
+	memcpy(&dest->vm_page_prot, &src->vm_page_prot,
+	       sizeof(dest->vm_page_prot));
+	/*
+	 * src->shared.rb may be modified concurrently when called from
+	 * dup_mmap(), but the clone will reinitialize it.
+	 */
+	data_race(memcpy(&dest->shared, &src->shared, sizeof(dest->shared)));
+	memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx,
+	       sizeof(dest->vm_userfaultfd_ctx));
+#ifdef CONFIG_ANON_VMA_NAME
+	dest->anon_name = src->anon_name;
+#endif
+#ifdef CONFIG_SWAP
+	memcpy(&dest->swap_readahead_info, &src->swap_readahead_info,
+	       sizeof(dest->swap_readahead_info));
+#endif
+#ifndef CONFIG_MMU
+	dest->vm_region = src->vm_region;
+#endif
+#ifdef CONFIG_NUMA
+	dest->vm_policy = src->vm_policy;
+#endif
+}
+
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 {
 	struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
@@ -458,11 +494,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 
 	ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
 	ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
-	/*
-	 * orig->shared.rb may be modified concurrently, but the clone
-	 * will be reinitialized.
-	 */
-	data_race(memcpy(new, orig, sizeof(*new)));
+	vm_area_init_from(orig, new);
 	vma_lock_init(new, true);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 	vma_numab_state_init(new);
@@ -471,7 +503,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	return new;
 }
 
-void __vm_area_free(struct vm_area_struct *vma)
+void vm_area_free(struct vm_area_struct *vma)
 {
 	/* The vma should be detached while being destroyed. */
 	vma_assert_detached(vma);
@@ -480,25 +512,6 @@ void __vm_area_free(struct vm_area_struct *vma)
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
-#ifdef CONFIG_PER_VMA_LOCK
-static void vm_area_free_rcu_cb(struct rcu_head *head)
-{
-	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
-						  vm_rcu);
-
-	__vm_area_free(vma);
-}
-#endif
-
-void vm_area_free(struct vm_area_struct *vma)
-{
-#ifdef CONFIG_PER_VMA_LOCK
-	call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
-#else
-	__vm_area_free(vma);
-#endif
-}
-
 static void account_kernel_stack(struct task_struct *tsk, int account)
 {
 	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
@@ -3144,6 +3157,11 @@ void __init mm_cache_init(void)
 
 void __init proc_caches_init(void)
 {
+	struct kmem_cache_args args = {
+		.use_freeptr_offset = true,
+		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
+	};
+
 	sighand_cachep = kmem_cache_create("sighand_cache",
 			sizeof(struct sighand_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
@@ -3160,8 +3178,9 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-	vm_area_cachep = KMEM_CACHE(vm_area_struct,
-			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
+	vm_area_cachep = kmem_cache_create("vm_area_struct",
+			sizeof(struct vm_area_struct), &args,
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
 			SLAB_ACCOUNT);
 	mmap_init();
 	nsproxy_cache_init();
diff --git a/mm/mmap.c b/mm/mmap.c
index cda01071c7b1..7aa36216ecc0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1305,7 +1305,8 @@ void exit_mmap(struct mm_struct *mm)
 	do {
 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += vma_pages(vma);
-		remove_vma(vma, /* unreachable = */ true);
+		vma_mark_detached(vma);
+		remove_vma(vma);
 		count++;
 		cond_resched();
 		vma = vma_next(&vmi);
diff --git a/mm/vma.c b/mm/vma.c
index 93ff42ac2002..0a5158d611e3 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -406,19 +406,14 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg,
 /*
  * Close a vm structure and free it.
  */
-void remove_vma(struct vm_area_struct *vma, bool unreachable)
+void remove_vma(struct vm_area_struct *vma)
 {
 	might_sleep();
 	vma_close(vma);
 	if (vma->vm_file)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
-	if (unreachable) {
-		vma_mark_detached(vma);
-		__vm_area_free(vma);
-	} else {
-		vm_area_free(vma);
-	}
+	vm_area_free(vma);
 }
 
 /*
@@ -1201,7 +1196,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		remove_vma(vma, /* unreachable = */ false);
+		remove_vma(vma);
 
 	vm_unacct_memory(vms->nr_accounted);
 	validate_mm(mm);
diff --git a/mm/vma.h b/mm/vma.h
index 63dd38d5230c..f51005b95b39 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -170,7 +170,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  unsigned long start, size_t len, struct list_head *uf,
 		  bool unlock);
 
-void remove_vma(struct vm_area_struct *vma, bool unreachable);
+void remove_vma(struct vm_area_struct *vma);
 
 void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
 		struct vm_area_struct *prev, struct vm_area_struct *next);
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2ce032943861..49a85ce0d45a 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -697,14 +697,9 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void __vm_area_free(struct vm_area_struct *vma)
-{
-	free(vma);
-}
-
 static inline void vm_area_free(struct vm_area_struct *vma)
 {
-	__vm_area_free(vma);
+	free(vma);
 }
 
 static inline void lru_add_drain(void)
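(Follow-up note, not part of the patch.) For anyone adapting this
pattern to another cache, here is a condensed sketch of the cache setup
used above. The my_obj type is illustrative, not kernel code; the patch
itself does the equivalent for vm_area_struct in proc_caches_init().

#include <linux/slab.h>
#include <linux/mm_types.h>
#include <linux/refcount.h>
#include <linux/init.h>
#include <linux/errno.h>

/*
 * SLAB_TYPESAFE_BY_RCU defers reuse of the underlying slab pages (not
 * of individual objects) until an RCU grace period has elapsed, and the
 * dedicated freeptr_t field keeps the allocator's freelist pointer out
 * of the members that RCU readers may still inspect.
 */
struct my_obj {
	unsigned long key;
	refcount_t refcnt;	/* readers detect reuse via this */
	freeptr_t free_ptr;	/* reserved for the slab freelist pointer */
};

static struct kmem_cache *my_obj_cachep;

static int __init my_obj_cache_init(void)
{
	struct kmem_cache_args args = {
		.use_freeptr_offset = true,
		.freeptr_offset = offsetof(struct my_obj, free_ptr),
	};

	my_obj_cachep = kmem_cache_create("my_obj", sizeof(struct my_obj),
					  &args,
					  SLAB_HWCACHE_ALIGN |
					  SLAB_TYPESAFE_BY_RCU);
	return my_obj_cachep ? 0 : -ENOMEM;
}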