From patchwork Tue Apr 8 09:22:39 2025
Date: Tue, 08 Apr 2025 09:22:39 +0000
In-Reply-To: <20250408-vma-v16-0-d8b446e885d9@google.com>
References: <20250408-vma-v16-0-d8b446e885d9@google.com>
X-Mailer: b4 0.14.2
Message-ID: <20250408-vma-v16-2-d8b446e885d9@google.com>
Subject: [PATCH v16 2/9] mm: rust: add vm_area_struct methods that require
 read access
From: Alice Ryhl
To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
 John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
 Arnd Bergmann, Jann Horn, Suren Baghdasaryan
Howlett" , Andrew Morton , Greg Kroah-Hartman , Arnd Bergmann , Jann Horn , Suren Baghdasaryan Cc: Alex Gaynor , Boqun Feng , Gary Guo , " =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= " , Benno Lossin , Andreas Hindborg , Trevor Gross , linux-kernel@vger.kernel.org, linux-mm@kvack.org, rust-for-linux@vger.kernel.org, Alice Ryhl X-Rspamd-Queue-Id: 1CF7740014 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: woirokc8jjzbrzym36qdy3u8ugwoy944 X-HE-Tag: 1744104243-924243 X-HE-Meta: U2FsdGVkX1860yjDqdduRX8AiYQ8GQoooueX97YIwJtZi/TatTM8nkfMqOoZWK6fzbfHTQRvILhTr5nXJj8mpAtlYhjUFi7sZhDc17js2yG27OA2eVSFiZFzq8bj1K9HtC+YoxtP2+ra9eLpa0HgYPLP5Fr7JsNjJ6NSpdnSM+zkHUK3FgEjxh3cmYgH4MWhFjBfv9/+IsVViYhY1KjuSVw0takRpM/x0uf4KLaqiAj2uZ3t/u0wlcOK8UhgO9ILQIMUbRKc2J1OV2C/gqvP0OeD0G2EMbY1Il8oAalMIMzXpnQeY+Lhty5POSem4toX9Uh9PD+JG1b4UIuwrRaPGkj/lI9CrQdW4vxqDLz8jhlohfInTH6ujEnO5Am+HNakFKemSwToemr99yrjCqqgT89Pl0mDKgxvLHqZWJTSTtQTqJLJqODQ7m3suJQiyKCVIrG98tmsWUNdkMJbMPdjcRVpKavxiQpMVLwShyUIFaDQsBgG2fwQZPRkNfuCimdnCfbMgJ1cyqPwCKTJ3ppl9da1pnu1PyCg0Gd5BTT+FW5ygR2AIq+D4IkCoMpXH2YPfU6VxAsie0SEyYjlLJF+osEgpuXplwgM0bbRP9VqjJgLeNTpVVdF2n9YcIcDxqyZdMIPWWwzO2B91UusWxdzyC9uQS2w6/SyBX2fBvdX/utAX+1MQVgVZyXePXdH21uebX146esDAWxRfdthFaUa+QSQDq2pQlpSjZ7/PoNZ+a2go+pqfep5iaQBn0qE6GbIsmbmN8d/4VS8eBhiUgQs94ymaQsa2yIJgRq9ZjIzPOYD4P4BM7LmYGm8yrZkVc2SA7bX4BebFL3Uk4qinVeegpL7hUeZb+NdEPfB2K19Dc7Y0kUQX/VP+UpBGSdGvheViJFr0bMwxSwVRBfF1HPlaTK49GPDp9VYH/MFdex6vWIxEvS+f0m9/giShAWYlEFQ9q0egd8BjpRHIleWQoS EVylg8qU f9ym4I8WX+m/QfSjwckRkgxbxgzrXaOYfzgvChcTlFQfhQToH1DNaNtkUxCUEiUga/O8JiJlGCXNoiFgYNtbEB+q5W6Fl5MUP/vZ7VloMLKAsZtdwmPk6t1wIiq9AJtjTbWapAde7/oPgQmZDYMHDFV6T+lukL4fUHyQXNteBzM+chXeVXW3ipCIIZedfBJ/1k2fXicP4WuIJC8VudwaDuZMmHoYc27E/R2Z2XRy8wM+LNatumdpKgUynU3tuoxxzaAGdU+yFJrW1GrKFDACk2WxoS0SU/oTysVa7FUprL7B8GAcwcmJPZP2CltLJOA6nq3uQx7sstpCED/8cgOh6QZK/3etx8oWcEX7JJdgYxUIvc2qCMU9NxQAuul6OsZ0T2fd8ZvKMmuEsBY9L3EPL+aein4rF5goW0NmQ50LiOxtiIvSX/cWBXppEHtTdJ+aiZuEA25MQRaqIQMAUlYVag5FZe2hYOEL0r+GFvY5dMIqn8R8KSleDX+7LwICyiE/KJr9M64u9H2MOMXvPzGCg50c2aAwwJSVO8JcIgWdpktUTiw2OTs/rxZt39vaURmpaNn8b1uAVry1cYcCbtwarSeWc+ztwSOHVoBFlhieweH231VPBHD/oNlFl+8FoaxoG/3PzwzRWaht1GP7wVdObRWz9xjYVN9Oy5T5QvLINjtE5UkLi49uZbSH21AYTlhU1HYo/XZ7k30fjDsrLZJF+Wc/0O1UGOX2lJ7oEVHbUZkXsvhLrJ+2hVe5dRNGfOCICPxHDNVSRJXifwuEFG5eduXi7efNwrlpY0ZIF X-Bogosity: Ham, tests=bogofilter, spamicity=0.092493, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: This adds a type called VmaRef which is used when referencing a vma that you have read access to. Here, read access means that you hold either the mmap read lock or the vma read lock (or stronger). Additionally, a vma_lookup method is added to the mmap read guard, which enables you to obtain a &VmaRef in safe Rust code. This patch only provides a way to lock the mmap read lock, but a follow-up patch also provides a way to just lock the vma read lock. Acked-by: Lorenzo Stoakes Acked-by: Liam R. 
Acked-by: Lorenzo Stoakes
Acked-by: Liam R. Howlett
Reviewed-by: Jann Horn
Reviewed-by: Andreas Hindborg
Reviewed-by: Gary Guo
Signed-off-by: Alice Ryhl
---
 rust/helpers/mm.c      |   6 ++
 rust/kernel/mm.rs      |  23 ++++++
 rust/kernel/mm/virt.rs | 210 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 239 insertions(+)

diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
index 7201747a5d314b2b120b30c0b906715c04ca77a5..7b72eb065a3e1173c920f02a440053cf6e93814e 100644
--- a/rust/helpers/mm.c
+++ b/rust/helpers/mm.c
@@ -37,3 +37,9 @@ void rust_helper_mmap_read_unlock(struct mm_struct *mm)
 {
 	mmap_read_unlock(mm);
 }
+
+struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
+					      unsigned long addr)
+{
+	return vma_lookup(mm, addr);
+}
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index eda7a479cff7e79760bb49eb4bb16209bbfc6147..f1689ccb374078a3141489e487fc32cd97c9c232 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -18,6 +18,8 @@
 };
 use core::{ops::Deref, ptr::NonNull};
 
+pub mod virt;
+
 /// A wrapper for the kernel's `struct mm_struct`.
 ///
 /// This represents the address space of a userspace process, so each process has one `Mm`
@@ -201,6 +203,27 @@ pub struct MmapReadGuard<'a> {
     _nts: NotThreadSafe,
 }
 
+impl<'a> MmapReadGuard<'a> {
+    /// Look up a vma at the given address.
+    #[inline]
+    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmaRef> {
+        // SAFETY: By the type invariants we hold the mmap read guard, so we can safely call this
+        // method. Any value is okay for `vma_addr`.
+        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
+
+        if vma.is_null() {
+            None
+        } else {
+            // SAFETY: We just checked that a vma was found, so the pointer references a valid vma.
+            //
+            // Furthermore, the returned vma is still under the protection of the read lock guard
+            // and can be used while the mmap read lock is still held. That the vma is not used
+            // after the MmapReadGuard gets dropped is enforced by the borrow-checker.
+            unsafe { Some(virt::VmaRef::from_raw(vma)) }
+        }
+    }
+}
+
 impl Drop for MmapReadGuard<'_> {
     #[inline]
     fn drop(&mut self) {
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
new file mode 100644
index 0000000000000000000000000000000000000000..a66be649f0b8d3dfae8ce2d18b70cb2b283fb7fe
--- /dev/null
+++ b/rust/kernel/mm/virt.rs
@@ -0,0 +1,210 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Virtual memory.
+//!
+//! This module deals with managing a single VMA in the address space of a userspace process. Each
+//! VMA corresponds to a region of memory that the userspace process can access, and the VMA lets
+//! you control what happens when userspace reads or writes to that region of memory.
+//!
+//! The module has several different Rust types that all correspond to the C type called
+//! `vm_area_struct`. The different structs represent what kind of access you have to the VMA,
+//! e.g. [`VmaRef`] is used when you hold the mmap or vma read lock. Using the appropriate struct
+//! ensures that you can't, for example, accidentally call a function that requires holding the
+//! write lock when you only hold the read lock.
+
+use crate::{bindings, mm::MmWithUser, types::Opaque};
+
+/// A wrapper for the kernel's `struct vm_area_struct` with read access.
+///
+/// It represents an area of virtual memory.
+///
+/// # Invariants
+///
+/// The caller must hold the mmap read lock or the vma read lock.
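+///
+/// # Examples
+///
+/// A minimal sketch of obtaining and using a `VmaRef` (illustrative only; `mm` is assumed to be
+/// a `&MmWithUser` provided by the surrounding code):
+///
+/// ```ignore
+/// let guard = mm.mmap_read_lock();
+/// if let Some(vma) = guard.vma_lookup(addr) {
+///     pr_info!("vma flags: {:#x}\n", vma.flags());
+/// }
+/// ```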
+#[repr(transparent)]
+pub struct VmaRef {
+    vma: Opaque<bindings::vm_area_struct>,
+}
+
+// Methods you can call when holding the mmap or vma read lock (or stronger). They must be usable
+// no matter what the vma flags are.
+impl VmaRef {
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap or vma
+    /// read lock (or stronger) is held for at least the duration of 'a.
+    #[inline]
+    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {
+        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
+        unsafe { &*vma.cast() }
+    }
+
+    /// Returns a raw pointer to this area.
+    #[inline]
+    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
+        self.vma.get()
+    }
+
+    /// Access the underlying `mm_struct`.
+    #[inline]
+    pub fn mm(&self) -> &MmWithUser {
+        // SAFETY: By the type invariants, this `vm_area_struct` is valid and we hold the mmap/vma
+        // read lock or stronger. This implies that the underlying mm has a non-zero value of
+        // `mm_users`.
+        unsafe { MmWithUser::from_raw((*self.as_ptr()).vm_mm) }
+    }
+
+    /// Returns the flags associated with the virtual memory area.
+    ///
+    /// The possible flags are a combination of the constants in [`flags`].
+    #[inline]
+    pub fn flags(&self) -> vm_flags_t {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_2.vm_flags }
+    }
+
+    /// Returns the (inclusive) start address of the virtual memory area.
+    #[inline]
+    pub fn start(&self) -> usize {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_start }
+    }
+
+    /// Returns the (exclusive) end address of the virtual memory area.
+    #[inline]
+    pub fn end(&self) -> usize {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_end }
+    }
+
+    /// Zap pages in the given page range.
+    ///
+    /// This clears page table mappings for the range at the leaf level, leaving all other page
+    /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
+    /// anonymous memory is completely freed, file-backed memory has its reference count on page
+    /// cache folios dropped, and any dirty data will still be written back to disk as usual.
+    ///
+    /// It may seem odd that we clear at the leaf level; this is, however, a product of the page
+    /// table structure used to map physical memory into a virtual address space: each virtual
+    /// address actually consists of a series of array indices into page tables, which form a
+    /// hierarchical page table level structure.
+    ///
+    /// As a result, each page table level maps a multiple of page table levels below, and thus
+    /// spans ever-increasing ranges of pages. At the leaf or PTE level, we map the actual
+    /// physical memory.
+    ///
+    /// It is here where a zap operates, as it is the only place we can be certain of clearing
+    /// without impacting any other virtual mappings. It is an implementation detail as to whether
+    /// the kernel goes further in freeing unused page tables, but for the purposes of this
+    /// operation we must only assume that the leaf level is cleared.
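+    ///
+    /// For example, zapping the first page of the area could look as follows (an illustrative
+    /// sketch; `PAGE_SIZE` is assumed to come from [`crate::page`]):
+    ///
+    /// ```ignore
+    /// vma.zap_page_range_single(vma.start(), kernel::page::PAGE_SIZE);
+    /// ```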
+    #[inline]
+    pub fn zap_page_range_single(&self, address: usize, size: usize) {
+        let (end, did_overflow) = address.overflowing_add(size);
+        if did_overflow || address < self.start() || self.end() < end {
+            // TODO: call WARN_ONCE once Rust version of it is added
+            return;
+        }
+
+        // SAFETY: By the type invariants, the caller has read access to this VMA, which is
+        // sufficient for this method call. This method has no requirements on the vma flags. The
+        // address range is checked to be within the vma.
+        unsafe {
+            bindings::zap_page_range_single(self.as_ptr(), address, size, core::ptr::null_mut())
+        };
+    }
+}
+
+/// The integer type used for vma flags.
+#[doc(inline)]
+pub use bindings::vm_flags_t;
+
+/// All possible flags for [`VmaRef`].
+pub mod flags {
+    use super::vm_flags_t;
+    use crate::bindings;
+
+    /// No flags are set.
+    pub const NONE: vm_flags_t = bindings::VM_NONE as _;
+
+    /// Mapping allows reads.
+    pub const READ: vm_flags_t = bindings::VM_READ as _;
+
+    /// Mapping allows writes.
+    pub const WRITE: vm_flags_t = bindings::VM_WRITE as _;
+
+    /// Mapping allows execution.
+    pub const EXEC: vm_flags_t = bindings::VM_EXEC as _;
+
+    /// Mapping is shared.
+    pub const SHARED: vm_flags_t = bindings::VM_SHARED as _;
+
+    /// Mapping may be updated to allow reads.
+    pub const MAYREAD: vm_flags_t = bindings::VM_MAYREAD as _;
+
+    /// Mapping may be updated to allow writes.
+    pub const MAYWRITE: vm_flags_t = bindings::VM_MAYWRITE as _;
+
+    /// Mapping may be updated to allow execution.
+    pub const MAYEXEC: vm_flags_t = bindings::VM_MAYEXEC as _;
+
+    /// Mapping may be updated to be shared.
+    pub const MAYSHARE: vm_flags_t = bindings::VM_MAYSHARE as _;
+
+    /// Page-ranges managed without `struct page`, just pure PFN.
+    pub const PFNMAP: vm_flags_t = bindings::VM_PFNMAP as _;
+
+    /// Memory mapped I/O or similar.
+    pub const IO: vm_flags_t = bindings::VM_IO as _;
+
+    /// Do not copy this vma on fork.
+    pub const DONTCOPY: vm_flags_t = bindings::VM_DONTCOPY as _;
+
+    /// Cannot expand with mremap().
+    pub const DONTEXPAND: vm_flags_t = bindings::VM_DONTEXPAND as _;
+
+    /// Lock the pages covered when they are faulted in.
+    pub const LOCKONFAULT: vm_flags_t = bindings::VM_LOCKONFAULT as _;
+
+    /// Is a VM accounted object.
+    pub const ACCOUNT: vm_flags_t = bindings::VM_ACCOUNT as _;
+
+    /// Should the VM suppress accounting.
+    pub const NORESERVE: vm_flags_t = bindings::VM_NORESERVE as _;
+
+    /// Huge TLB Page VM.
+    pub const HUGETLB: vm_flags_t = bindings::VM_HUGETLB as _;
+
+    /// Synchronous page faults. (DAX-specific)
+    pub const SYNC: vm_flags_t = bindings::VM_SYNC as _;
+
+    /// Architecture-specific flag.
+    pub const ARCH_1: vm_flags_t = bindings::VM_ARCH_1 as _;
+
+    /// Wipe VMA contents in child on fork.
+    pub const WIPEONFORK: vm_flags_t = bindings::VM_WIPEONFORK as _;
+
+    /// Do not include in the core dump.
+    pub const DONTDUMP: vm_flags_t = bindings::VM_DONTDUMP as _;
+
+    /// Not soft dirty clean area.
+    pub const SOFTDIRTY: vm_flags_t = bindings::VM_SOFTDIRTY as _;
+
+    /// Can contain `struct page` and pure PFN pages.
+    pub const MIXEDMAP: vm_flags_t = bindings::VM_MIXEDMAP as _;
+
+    /// MADV_HUGEPAGE marked this vma.
+    pub const HUGEPAGE: vm_flags_t = bindings::VM_HUGEPAGE as _;
+
+    /// MADV_NOHUGEPAGE marked this vma.
+    pub const NOHUGEPAGE: vm_flags_t = bindings::VM_NOHUGEPAGE as _;
+
+    /// KSM may merge identical pages.
+    pub const MERGEABLE: vm_flags_t = bindings::VM_MERGEABLE as _;
+}
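
As a closing illustration of how the flag constants combine with the
accessors above (a sketch only; `vma` stands for any `&VmaRef`, e.g.
one returned by `vma_lookup`):

    use kernel::mm::virt::{flags, VmaRef};

    fn is_shared_writable(vma: &VmaRef) -> bool {
        // `vm_flags` is a bitmask, so individual capabilities are
        // tested with bitwise AND.
        let f = vma.flags();
        (f & flags::SHARED != 0) && (f & flags::WRITE != 0)
    }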