
rust: mm: add abstractions for mm_struct and vm_area_struct

Message ID: 20240723-vma-v1-1-32ad5a0118ee@google.com
Series: rust: mm: add abstractions for mm_struct and vm_area_struct

Commit Message

Alice Ryhl July 23, 2024, 2:32 p.m. UTC
This is a follow-up to the page abstractions [1] that were recently
merged in 6.11. Rust Binder will need these abstractions to manipulate
the vma in its implementation of the mmap fop on the Binder file.

The ARef wrapper is not used for mm_struct because there are several
different types of refcounts.

This patch is based on Wedson's implementation on the old rust branch,
but has been changed significantly. All mistakes are Alice's.
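
To illustrate how the pieces fit together, a vma lookup could go roughly
as follows (this sketch is illustrative only and not part of the patch):

    use kernel::mm::MmGrab;

    /// Illustrative helper: find the start address of the vma containing
    /// `addr` in the given address space.
    fn vma_start_of(mm: &MmGrab, addr: usize) -> Option<usize> {
        // Fails if the address space has already been torn down, i.e. all
        // `mmget` refcounts are gone.
        let mm = mm.mmget_not_zero()?.use_async_put();
        // The returned `Area` may only be used while the read lock is held.
        let lock = mm.mmap_read_trylock()?;
        let vma = lock.vma_lookup(addr)?;
        Some(vma.start())
    }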

Link: https://lore.kernel.org/r/20240528-alice-mm-v7-4-78222c31b8f4@google.com [1]
Co-developed-by: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/helpers.c         |  49 ++++++++++
 rust/kernel/lib.rs     |   1 +
 rust/kernel/mm.rs      | 259 +++++++++++++++++++++++++++++++++++++++++++++++++
 rust/kernel/mm/virt.rs | 193 ++++++++++++++++++++++++++++++++++++
 4 files changed, 502 insertions(+)


---
base-commit: b1263411112305acf2af728728591465becb45b0
change-id: 20240723-vma-f80119f9fb35

Best regards,

Comments

Matthew Wilcox July 23, 2024, 2:50 p.m. UTC | #1
On Tue, Jul 23, 2024 at 02:32:03PM +0000, Alice Ryhl wrote:
> +// SAFETY: It is safe to call `mmdrop` on another thread than where `mmgrab` was called.

If I were reading the documentation, I might want to know if it's safe
to call in interrupt context (either soft or hard).  ie can mmdrop
sleep (if it turns out to be the last owner of the mm).

> +/// A wrapper for the kernel's `struct vm_area_struct`.
> +///
> +/// It represents an area of virtual memory.
> +#[repr(transparent)]
> +pub struct Area {
> +    vma: Opaque<bindings::vm_area_struct>,
> +}

That seems like a very generic name!  MMArea?  VMA?  Certainly when I'm
talking to people, I say VMA.  struct vm_area_struct is a terrible name
and I'd be happy to change it if we could stomach that churn.  If I were
naming it today, I'd want to call it struct mm_area.
Alice Ryhl July 23, 2024, 3:04 p.m. UTC | #2
On Tue, Jul 23, 2024 at 4:50 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Jul 23, 2024 at 02:32:03PM +0000, Alice Ryhl wrote:
> > +// SAFETY: It is safe to call `mmdrop` on another thread than where `mmgrab` was called.
>
> If I were reading the documentation, I might want to know if it's safe
> to call in interrupt context (either soft or hard).  ie can mmdrop
> sleep (if it turns out to be the last owner of the mm).

I'll add some information on that.

> > +/// A wrapper for the kernel's `struct vm_area_struct`.
> > +///
> > +/// It represents an area of virtual memory.
> > +#[repr(transparent)]
> > +pub struct Area {
> > +    vma: Opaque<bindings::vm_area_struct>,
> > +}
>
> That seems like a very generic name!  MMArea?  VMA?  Certainly when I'm
> talking to people, I say VMA.  struct vm_area_struct is a terrible name
> and I'd be happy to change it if we could stomach that churn.  If I were
> naming it today, I'd want to call it struct mm_area.

Yeah, you're right. I should change it. Renaming the C struct seems
like it would be a lot of work. For now, I'll rename it to VmArea to
match C.

Alice
Benno Lossin July 26, 2024, 8:10 a.m. UTC | #3
On 23.07.24 16:32, Alice Ryhl wrote:
> This is a follow-up to the page abstractions [1] that were recently
> merged in 6.11. Rust Binder will need these abstractions to manipulate
> the vma in its implementation of the mmap fop on the Binder file.
> 
> The ARef wrapper is not used for mm_struct because there are several
> different types of refcounts.

I am confused, why can't you use the `ARef` wrapper for the different
types that you create below?

> This patch is based on Wedson's implementation on the old rust branch,
> but has been changed significantly. All mistakes are Alice's.
> 
> Link: https://lore.kernel.org/r/20240528-alice-mm-v7-4-78222c31b8f4@google.com [1]
> Co-developed-by: Wedson Almeida Filho <wedsonaf@gmail.com>
> Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  rust/helpers.c         |  49 ++++++++++
>  rust/kernel/lib.rs     |   1 +
>  rust/kernel/mm.rs      | 259 +++++++++++++++++++++++++++++++++++++++++++++++++
>  rust/kernel/mm/virt.rs | 193 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 502 insertions(+)
> 
> diff --git a/rust/helpers.c b/rust/helpers.c
> index 305f0577fae9..9aa5150ebe26 100644
> --- a/rust/helpers.c
> +++ b/rust/helpers.c
> @@ -199,6 +199,55 @@ rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
>  }
>  EXPORT_SYMBOL_GPL(rust_helper_krealloc);
> 
> +void rust_helper_mmgrab(struct mm_struct *mm)
> +{
> +	mmgrab(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmgrab);
> +
> +void rust_helper_mmdrop(struct mm_struct *mm)
> +{
> +	mmdrop(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmdrop);
> +
> +bool rust_helper_mmget_not_zero(struct mm_struct *mm)
> +{
> +	return mmget_not_zero(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmget_not_zero);
> +
> +bool rust_helper_mmap_read_trylock(struct mm_struct *mm)
> +{
> +	return mmap_read_trylock(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmap_read_trylock);
> +
> +void rust_helper_mmap_read_unlock(struct mm_struct *mm)
> +{
> +	mmap_read_unlock(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmap_read_unlock);
> +
> +void rust_helper_mmap_write_lock(struct mm_struct *mm)
> +{
> +	mmap_write_lock(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmap_write_lock);
> +
> +void rust_helper_mmap_write_unlock(struct mm_struct *mm)
> +{
> +	mmap_write_unlock(mm);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_mmap_write_unlock);
> +
> +struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
> +					      unsigned long addr)
> +{
> +	return vma_lookup(mm, addr);
> +}
> +EXPORT_SYMBOL_GPL(rust_helper_vma_lookup);
> +
>  /*
>   * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
>   * use it in contexts where Rust expects a `usize` like slice (array) indices.
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index 5d310e79485f..3cbc4cf847a2 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -33,6 +33,7 @@
>  pub mod ioctl;
>  #[cfg(CONFIG_KUNIT)]
>  pub mod kunit;
> +pub mod mm;
>  #[cfg(CONFIG_NET)]
>  pub mod net;
>  pub mod page;
> diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
> new file mode 100644
> index 000000000000..7fa1e2431944
> --- /dev/null
> +++ b/rust/kernel/mm.rs
> @@ -0,0 +1,259 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +// Copyright (C) 2024 Google LLC.
> +
> +//! Memory management.
> +//!
> +//! C header: [`include/linux/mm.h`](../../../../include/linux/mm.h)
> +
> +use crate::bindings;
> +
> +use core::{marker::PhantomData, mem::ManuallyDrop, ptr::NonNull};
> +
> +pub mod virt;
> +
> +/// A smart pointer that references a `struct mm` and owns an `mmgrab` refcount.
> +///
> +/// # Invariants
> +///
> +/// An `MmGrab` owns an `mmgrab` refcount to the inner `struct mm_struct`.

You also need that `mm` is a valid pointer.

> +pub struct MmGrab {
> +    mm: NonNull<bindings::mm_struct>,
> +}
> +
> +impl MmGrab {
> +    /// Call `mmgrab` on `current.mm`.
> +    #[inline]
> +    pub fn mmgrab_current() -> Option<Self> {
> +        // SAFETY: It's safe to get the `mm` field from current.
> +        let mm = unsafe {
> +            let current = bindings::get_current();
> +            (*current).mm
> +        };
> +
> +        let mm = NonNull::new(mm)?;
> +
> +        // SAFETY: We just checked that `mm` is not null.
> +        unsafe { bindings::mmgrab(mm.as_ptr()) };
> +
> +        // INVARIANT: We just created an `mmgrab` refcount.
> +        Some(Self { mm })
> +    }
> +
> +    /// Check whether this vma is associated with this mm.
> +    #[inline]
> +    pub fn is_same_mm(&self, area: &virt::Area) -> bool {
> +        // SAFETY: The `vm_mm` field of the area is immutable, so we can read it without
> +        // synchronization.
> +        let vm_mm = unsafe { (*area.as_ptr()).vm_mm };
> +
> +        vm_mm == self.mm.as_ptr()
> +    }
> +
> +    /// Calls `mmget_not_zero` and returns a handle if it succeeds.
> +    #[inline]
> +    pub fn mmget_not_zero(&self) -> Option<MmGet> {
> +        // SAFETY: We know that `mm` is still valid since we hold an `mmgrab` refcount.
> +        let success = unsafe { bindings::mmget_not_zero(self.mm.as_ptr()) };
> +
> +        if success {
> +            Some(MmGet { mm: self.mm })
> +        } else {
> +            None
> +        }
> +    }
> +}
> +
> +// SAFETY: It is safe to call `mmdrop` on another thread than where `mmgrab` was called.
> +unsafe impl Send for MmGrab {}
> +// SAFETY: All methods on this struct are safe to call in parallel from several threads.
> +unsafe impl Sync for MmGrab {}
> +
> +impl Drop for MmGrab {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: This gives up an `mmgrab` refcount to a valid `struct mm_struct`.
> +        // INVARIANT: We own an `mmgrab` refcount, so we can give it up.

This INVARIANT comment seems out of place and the SAFETY comment should
probably be "By the type invariant of `Self`, we own an `mmgrab`
refcount and `self.mm` is valid.".

> +        unsafe { bindings::mmdrop(self.mm.as_ptr()) };
> +    }
> +}
> +
> +/// A smart pointer that references a `struct mm` and owns an `mmget` refcount.
> +///
> +/// Values of this type are created using [`MmGrab::mmget_not_zero`].
> +///
> +/// # Invariants
> +///
> +/// An `MmGet` owns an `mmget` refcount to the inner `struct mm_struct`.

Ditto with the valid pointer here and below.

> +pub struct MmGet {
> +    mm: NonNull<bindings::mm_struct>,
> +}
> +
> +impl MmGet {
> +    /// Lock the mmap write lock.
> +    #[inline]
> +    pub fn mmap_write_lock(&self) -> MmapWriteLock<'_> {
> +        // SAFETY: The pointer is valid since we hold a refcount.
> +        unsafe { bindings::mmap_write_lock(self.mm.as_ptr()) };
> +
> +        // INVARIANT: We just acquired the write lock, so we can transfer to this guard.
> +        //
> +        // The signature of this function ensures that the `MmapWriteLock` will not outlive this
> +        // `mmget` refcount.
> +        MmapWriteLock {
> +            mm: self.mm,
> +            _lifetime: PhantomData,
> +        }
> +    }
> +
> +    /// When dropping this refcount, use `mmput_async` instead of `mmput`.

I don't get this comment.

> +    #[inline]
> +    pub fn use_async_put(self) -> MmGetAsync {
> +        // Disable destructor of `self`.
> +        let me = ManuallyDrop::new(self);
> +
> +        MmGetAsync { mm: me.mm }
> +    }
> +}
> +
> +impl Drop for MmGet {
> +    #[inline]
> +    fn drop(&mut self) {
> +        // SAFETY: We acquired a refcount when creating this object.

You can just copy-paste the SAFETY comment from above (if you don't use
the `ARef` pattern). Ditto below.

---
Cheers,
Benno

> +        unsafe { bindings::mmput(self.mm.as_ptr()) };
> +    }
> +}
> +
Benno Lossin July 26, 2024, 8:26 a.m. UTC | #4
On 26.07.24 10:14, Alice Ryhl wrote:
> On Fri, Jul 26, 2024 at 10:11 AM Benno Lossin <benno.lossin@proton.me> wrote:
>>
>> On 23.07.24 16:32, Alice Ryhl wrote:
>>> +pub struct MmGet {
>>> +    mm: NonNull<bindings::mm_struct>,
>>> +}
>>> +
>>> +impl MmGet {
>>> +    /// Lock the mmap write lock.
>>> +    #[inline]
>>> +    pub fn mmap_write_lock(&self) -> MmapWriteLock<'_> {
>>> +        // SAFETY: The pointer is valid since we hold a refcount.
>>> +        unsafe { bindings::mmap_write_lock(self.mm.as_ptr()) };
>>> +
>>> +        // INVARIANT: We just acquired the write lock, so we can transfer to this guard.
>>> +        //
>>> +        // The signature of this function ensures that the `MmapWriteLock` will not outlive this
>>> +        // `mmget` refcount.
>>> +        MmapWriteLock {
>>> +            mm: self.mm,
>>> +            _lifetime: PhantomData,
>>> +        }
>>> +    }
>>> +
>>> +    /// When dropping this refcount, use `mmput_async` instead of `mmput`.
>>
>> I don't get this comment.
> 
> The C side provides two ways to decrement the mmget refcount. One is
> mmput and the other is mmput_async. The difference is that when the
> refcount hits zero, mmput_async cleans up the mm_struct on the
> workqueue, whereas mmput cleans it up immediately. This means that
> mmput_async is safe in atomic context, but mmput is not.

I see, IMO this would be a better comment:

/// Converts this `MmGet` to `MmGetAsync`.
///
/// `MmGetAsync` uses `mmput_async` instead of `mmput` for decrementing
/// the refcount.

Since from a Rust perspective, this is just a conversion function. Maybe
the name should also reflect that ie `to_mm_get_async` or similar.
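
For example (untested, and using an `into_` name since the function
consumes `self`):

    impl MmGet {
        /// Converts this `MmGet` into an `MmGetAsync`.
        ///
        /// `MmGetAsync` uses `mmput_async` instead of `mmput` for
        /// decrementing the refcount.
        #[inline]
        pub fn into_async(self) -> MmGetAsync {
            // Transfer the `mmget` refcount to the returned `MmGetAsync`.
            let me = ManuallyDrop::new(self);
            MmGetAsync { mm: me.mm }
        }
    }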

---
Cheers,
Benno
Alice Ryhl July 26, 2024, 8:32 a.m. UTC | #5
On Fri, Jul 26, 2024 at 10:11 AM Benno Lossin <benno.lossin@proton.me> wrote:
>
> On 23.07.24 16:32, Alice Ryhl wrote:
> > This is a follow-up to the page abstractions [1] that were recently
> > merged in 6.11. Rust Binder will need these abstractions to manipulate
> > the vma in its implementation of the mmap fop on the Binder file.
> >
> > The ARef wrapper is not used for mm_struct because there are several
> > different types of refcounts.
>
> I am confused, why can't you use the `ARef` wrapper for the different
> types that you create below?

Well, maybe I can, but it means we have several wrapper structs of
Opaque<mm_struct>. Would it not be confusing? Could you suggest a
naming scheme for the structs I should have?

Alice
Alice Ryhl July 26, 2024, 8:33 a.m. UTC | #6
On Fri, Jul 26, 2024 at 10:26 AM Benno Lossin <benno.lossin@proton.me> wrote:
>
> On 26.07.24 10:14, Alice Ryhl wrote:
> > On Fri, Jul 26, 2024 at 10:11 AM Benno Lossin <benno.lossin@proton.me> wrote:
> >>
> >> On 23.07.24 16:32, Alice Ryhl wrote:
> >>> +pub struct MmGet {
> >>> +    mm: NonNull<bindings::mm_struct>,
> >>> +}
> >>> +
> >>> +impl MmGet {
> >>> +    /// Lock the mmap write lock.
> >>> +    #[inline]
> >>> +    pub fn mmap_write_lock(&self) -> MmapWriteLock<'_> {
> >>> +        // SAFETY: The pointer is valid since we hold a refcount.
> >>> +        unsafe { bindings::mmap_write_lock(self.mm.as_ptr()) };
> >>> +
> >>> +        // INVARIANT: We just acquired the write lock, so we can transfer to this guard.
> >>> +        //
> >>> +        // The signature of this function ensures that the `MmapWriteLock` will not outlive this
> >>> +        // `mmget` refcount.
> >>> +        MmapWriteLock {
> >>> +            mm: self.mm,
> >>> +            _lifetime: PhantomData,
> >>> +        }
> >>> +    }
> >>> +
> >>> +    /// When dropping this refcount, use `mmput_async` instead of `mmput`.
> >>
> >> I don't get this comment.
> >
> > The C side provides two ways to decrement the mmget refcount. One is
> > mmput and the other is mmput_async. The difference is that when the
> > refcount hits zero, mmput_async cleans up the mm_struct on the
> > workqueue, whereas mmput cleans it up immediately. This means that
> > mmput_async is safe in atomic context, but mmput is not.
>
> I see, IMO this would be a better comment:
>
> /// Converts this `MmGet` to `MmGetAsync`.
> ///
> /// `MmGetAsync` uses `mmput_async` instead of `mmput` for decrementing
> /// the refcount.
>
> Since from a Rust perspective, this is just a conversion function. Maybe
> the name should also reflect that ie `to_mm_get_async` or similar.

That sounds good to me.

Alice
Benno Lossin July 26, 2024, 1:36 p.m. UTC | #7
On 26.07.24 10:32, Alice Ryhl wrote:
> On Fri, Jul 26, 2024 at 10:11 AM Benno Lossin <benno.lossin@proton.me> wrote:
>>
>> On 23.07.24 16:32, Alice Ryhl wrote:
>>> This is a follow-up to the page abstractions [1] that were recently
>>> merged in 6.11. Rust Binder will need these abstractions to manipulate
>>> the vma in its implementation of the mmap fop on the Binder file.
>>>
>>> The ARef wrapper is not used for mm_struct because there are several
>>> different types of refcounts.
>>
>> I am confused, why can't you use the `ARef` wrapper for the different
>> types that you create below?
> 
> Well, maybe I can, but it means we have several wrapper structs of
> Opaque<mm_struct>. Would it not be confusing? Could you suggest a
> naming scheme for the structs I should have?

I don't know of a good way to avoid that, IMO your current
implementation has the same issue (multiple wrappers). So I don't think
it's that bad to have multiple wrappers for one C struct.
We could also use generics to solve this, right? I am not sure about the
ergonomics/looks, so for example:
- ARef<Mm<Grab>>
- ARef<Mm<Get>>
- ARef<Mm<Async>>

I think it looks fine, then you also only have one struct wrapper.
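
Rough sketch of what I mean (untested):

    use crate::types::{AlwaysRefCounted, Opaque};
    use core::{marker::PhantomData, ptr::NonNull};

    /// Marker types for the kind of refcount that is held.
    pub struct Grab;
    pub struct Get;
    pub struct Async;

    /// One wrapper for `struct mm_struct`, parameterized by refcount kind.
    #[repr(transparent)]
    pub struct Mm<Kind> {
        mm: Opaque<bindings::mm_struct>,
        _kind: PhantomData<Kind>,
    }

    // SAFETY: `mmgrab`/`mmdrop` operate on the `mm_count` refcount.
    unsafe impl AlwaysRefCounted for Mm<Grab> {
        fn inc_ref(&self) {
            // SAFETY: The pointer is valid by the type invariants.
            unsafe { bindings::mmgrab(self.mm.get()) };
        }
        unsafe fn dec_ref(obj: NonNull<Self>) {
            // SAFETY: The caller guarantees that the refcount is non-zero.
            unsafe { bindings::mmdrop(obj.cast().as_ptr()) };
        }
    }
    // ... and similarly for `Mm<Get>` (`mmput`) and `Mm<Async>`
    // (`mmput_async`).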

BTW what does "mm" stand for? Memory management?

---
Cheers,
Benno
Alice Ryhl July 26, 2024, 7:04 p.m. UTC | #8
On Fri, Jul 26, 2024 at 3:37 PM Benno Lossin <benno.lossin@proton.me> wrote:
>
> On 26.07.24 10:32, Alice Ryhl wrote:
> > On Fri, Jul 26, 2024 at 10:11 AM Benno Lossin <benno.lossin@proton.me> wrote:
> >>
> >> On 23.07.24 16:32, Alice Ryhl wrote:
> >>> This is a follow-up to the page abstractions [1] that were recently
> >>> merged in 6.11. Rust Binder will need these abstractions to manipulate
> >>> the vma in its implementation of the mmap fop on the Binder file.
> >>>
> >>> The ARef wrapper is not used for mm_struct because there are several
> >>> different types of refcounts.
> >>
> >> I am confused, why can't you use the `ARef` wrapper for the different
> >> types that you create below?
> >
> > Well, maybe I can, but it means we have several wrapper structs of
> > Opaque<mm_struct>. Would it not be confusing? Could you suggest a
> > naming scheme for the structs I should have?
>
> I don't know of a good way to avoid that, IMO your current
> implementation has the same issue (multiple wrappers). So I don't think
> it's that bad to have multiple wrappers for one C struct.
> We could also use generics to solve this, right? I am not sure about the
> ergonomics/looks, so for example:
> - ARef<Mm<Grab>>
> - ARef<Mm<Get>>
> - ARef<Mm<Async>>
>
> I think it looks fine, then you also only have one struct wrapper.
>
> BTW what does "mm" stand for? Memory management?

mm stands for memory management. Basically, an mm_struct keeps track
of the address space of a process, as far as I understand.

Alice

Patch

diff --git a/rust/helpers.c b/rust/helpers.c
index 305f0577fae9..9aa5150ebe26 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -199,6 +199,55 @@  rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
 }
 EXPORT_SYMBOL_GPL(rust_helper_krealloc);
 
+void rust_helper_mmgrab(struct mm_struct *mm)
+{
+	mmgrab(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmgrab);
+
+void rust_helper_mmdrop(struct mm_struct *mm)
+{
+	mmdrop(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmdrop);
+
+bool rust_helper_mmget_not_zero(struct mm_struct *mm)
+{
+	return mmget_not_zero(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmget_not_zero);
+
+bool rust_helper_mmap_read_trylock(struct mm_struct *mm)
+{
+	return mmap_read_trylock(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmap_read_trylock);
+
+void rust_helper_mmap_read_unlock(struct mm_struct *mm)
+{
+	mmap_read_unlock(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmap_read_unlock);
+
+void rust_helper_mmap_write_lock(struct mm_struct *mm)
+{
+	mmap_write_lock(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmap_write_lock);
+
+void rust_helper_mmap_write_unlock(struct mm_struct *mm)
+{
+	mmap_write_unlock(mm);
+}
+EXPORT_SYMBOL_GPL(rust_helper_mmap_write_unlock);
+
+struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
+					      unsigned long addr)
+{
+	return vma_lookup(mm, addr);
+}
+EXPORT_SYMBOL_GPL(rust_helper_vma_lookup);
+
 /*
  * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
  * use it in contexts where Rust expects a `usize` like slice (array) indices.
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 5d310e79485f..3cbc4cf847a2 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -33,6 +33,7 @@ 
 pub mod ioctl;
 #[cfg(CONFIG_KUNIT)]
 pub mod kunit;
+pub mod mm;
 #[cfg(CONFIG_NET)]
 pub mod net;
 pub mod page;
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
new file mode 100644
index 000000000000..7fa1e2431944
--- /dev/null
+++ b/rust/kernel/mm.rs
@@ -0,0 +1,259 @@ 
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Memory management.
+//!
+//! C header: [`include/linux/mm.h`](../../../../include/linux/mm.h)
+
+use crate::bindings;
+
+use core::{marker::PhantomData, mem::ManuallyDrop, ptr::NonNull};
+
+pub mod virt;
+
+/// A smart pointer that references a `struct mm_struct` and owns an `mmgrab` refcount.
+///
+/// # Invariants
+///
+/// An `MmGrab` owns an `mmgrab` refcount to the inner `struct mm_struct`.
+pub struct MmGrab {
+    mm: NonNull<bindings::mm_struct>,
+}
+
+impl MmGrab {
+    /// Call `mmgrab` on `current.mm`.
+    #[inline]
+    pub fn mmgrab_current() -> Option<Self> {
+        // SAFETY: It's safe to get the `mm` field from current.
+        let mm = unsafe {
+            let current = bindings::get_current();
+            (*current).mm
+        };
+
+        let mm = NonNull::new(mm)?;
+
+        // SAFETY: We just checked that `mm` is not null.
+        unsafe { bindings::mmgrab(mm.as_ptr()) };
+
+        // INVARIANT: We just created an `mmgrab` refcount.
+        Some(Self { mm })
+    }
+
+    /// Check whether this vma is associated with this mm.
+    #[inline]
+    pub fn is_same_mm(&self, area: &virt::Area) -> bool {
+        // SAFETY: The `vm_mm` field of the area is immutable, so we can read it without
+        // synchronization.
+        let vm_mm = unsafe { (*area.as_ptr()).vm_mm };
+
+        vm_mm == self.mm.as_ptr()
+    }
+
+    /// Calls `mmget_not_zero` and returns a handle if it succeeds.
+    #[inline]
+    pub fn mmget_not_zero(&self) -> Option<MmGet> {
+        // SAFETY: We know that `mm` is still valid since we hold an `mmgrab` refcount.
+        let success = unsafe { bindings::mmget_not_zero(self.mm.as_ptr()) };
+
+        if success {
+            Some(MmGet { mm: self.mm })
+        } else {
+            None
+        }
+    }
+}
+
+// SAFETY: It is safe to call `mmdrop` on another thread than where `mmgrab` was called.
+unsafe impl Send for MmGrab {}
+// SAFETY: All methods on this struct are safe to call in parallel from several threads.
+unsafe impl Sync for MmGrab {}
+
+impl Drop for MmGrab {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: This gives up an `mmgrab` refcount to a valid `struct mm_struct`.
+        // INVARIANT: We own an `mmgrab` refcount, so we can give it up.
+        unsafe { bindings::mmdrop(self.mm.as_ptr()) };
+    }
+}
+
+/// A smart pointer that references a `struct mm_struct` and owns an `mmget` refcount.
+///
+/// Values of this type are created using [`MmGrab::mmget_not_zero`].
+///
+/// # Invariants
+///
+/// An `MmGet` owns an `mmget` refcount to the inner `struct mm_struct`.
+pub struct MmGet {
+    mm: NonNull<bindings::mm_struct>,
+}
+
+impl MmGet {
+    /// Lock the mmap write lock.
+    #[inline]
+    pub fn mmap_write_lock(&self) -> MmapWriteLock<'_> {
+        // SAFETY: The pointer is valid since we hold a refcount.
+        unsafe { bindings::mmap_write_lock(self.mm.as_ptr()) };
+
+        // INVARIANT: We just acquired the write lock, so we can transfer to this guard.
+        //
+        // The signature of this function ensures that the `MmapWriteLock` will not outlive this
+        // `mmget` refcount.
+        MmapWriteLock {
+            mm: self.mm,
+            _lifetime: PhantomData,
+        }
+    }
+
+    /// When dropping this refcount, use `mmput_async` instead of `mmput`.
+    #[inline]
+    pub fn use_async_put(self) -> MmGetAsync {
+        // Disable destructor of `self`.
+        let me = ManuallyDrop::new(self);
+
+        MmGetAsync { mm: me.mm }
+    }
+}
+
+impl Drop for MmGet {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We acquired a refcount when creating this object.
+        unsafe { bindings::mmput(self.mm.as_ptr()) };
+    }
+}
+
+/// A smart pointer that references a `struct mm_struct` and owns an `mmget` refcount that will
+/// be dropped using `mmput_async`.
+///
+/// Values of this type are created using [`MmGet::use_async_put`].
+///
+/// # Invariants
+///
+/// An `MmGetAsync` owns an `mmget` refcount to the inner `struct mm_struct`.
+pub struct MmGetAsync {
+    mm: NonNull<bindings::mm_struct>,
+}
+
+impl MmGetAsync {
+    /// Lock the mmap write lock.
+    #[inline]
+    pub fn mmap_write_lock(&self) -> MmapWriteLock<'_> {
+        // SAFETY: The pointer is valid since we hold a refcount.
+        unsafe { bindings::mmap_write_lock(self.mm.as_ptr()) };
+
+        // INVARIANT: We just acquired the write lock, so we can transfer to this guard.
+        //
+        // The signature of this function ensures that the `MmapWriteLock` will not outlive this
+        // `mmget` refcount.
+        MmapWriteLock {
+            mm: self.mm,
+            _lifetime: PhantomData,
+        }
+    }
+
+    /// Try to lock the mmap read lock.
+    #[inline]
+    pub fn mmap_read_trylock(&self) -> Option<MmapReadLock<'_>> {
+        // SAFETY: The pointer is valid since we hold a refcount.
+        let success = unsafe { bindings::mmap_read_trylock(self.mm.as_ptr()) };
+
+        if success {
+            // INVARIANT: We just acquired the read lock, so we can transfer to this guard.
+            //
+            // The signature of this function ensures that the `MmapReadLock` will not outlive this
+            // `mmget` refcount.
+            Some(MmapReadLock {
+                mm: self.mm,
+                _lifetime: PhantomData,
+            })
+        } else {
+            None
+        }
+    }
+}
+
+impl Drop for MmGetAsync {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We acquired a refcount when creating this object.
+        unsafe { bindings::mmput_async(self.mm.as_ptr()) };
+    }
+}
+
+/// A guard for the mmap read lock.
+///
+/// # Invariants
+///
+/// This `MmapReadLock` guard owns the mmap read lock. For the duration of 'a, the `mmget` refcount
+/// will remain positive.
+pub struct MmapReadLock<'a> {
+    mm: NonNull<bindings::mm_struct>,
+    _lifetime: PhantomData<&'a bindings::mm_struct>,
+}
+
+impl<'a> MmapReadLock<'a> {
+    /// Look up a vma at the given address.
+    #[inline]
+    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::Area> {
+        // SAFETY: The `mm` pointer is known to be valid while this read lock is held.
+        let vma = unsafe { bindings::vma_lookup(self.mm.as_ptr(), vma_addr as _) };
+
+        if vma.is_null() {
+            None
+        } else {
+            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
+            // the returned area will borrow from this read lock guard, so it can only be used
+            // while the read lock is still held. The returned reference is immutable, so the
+            // reference cannot be used to modify the area.
+            unsafe { Some(virt::Area::from_ptr(vma)) }
+        }
+    }
+}
+
+impl Drop for MmapReadLock<'_> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We acquired the lock when creating this object.
+        unsafe { bindings::mmap_read_unlock(self.mm.as_ptr()) };
+    }
+}
+
+/// A guard for the mmap write lock.
+///
+/// # Invariants
+///
+/// This `MmapWriteLock` guard owns the mmap write lock. For the duration of 'a, the `mmget` refcount
+/// will remain positive.
+pub struct MmapWriteLock<'a> {
+    mm: NonNull<bindings::mm_struct>,
+    _lifetime: PhantomData<&'a mut bindings::mm_struct>,
+}
+
+impl<'a> MmapWriteLock<'a> {
+    /// Look up a vma at the given address.
+    #[inline]
+    pub fn vma_lookup(&mut self, vma_addr: usize) -> Option<&mut virt::Area> {
+        // SAFETY: The `mm` pointer is known to be valid while this write lock is held.
+        let vma = unsafe { bindings::vma_lookup(self.mm.as_ptr(), vma_addr as _) };
+
+        if vma.is_null() {
+            None
+        } else {
+            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
+            // the returned area will borrow from this write lock guard, so it can only be used
+            // while the write lock is still held. We hold the write lock, so mutable operations on
+            // the area are okay.
+            unsafe { Some(virt::Area::from_ptr_mut(vma)) }
+        }
+    }
+}
+
+impl Drop for MmapWriteLock<'_> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We acquired the lock when creating this object.
+        unsafe { bindings::mmap_write_unlock(self.mm.as_ptr()) };
+    }
+}
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
new file mode 100644
index 000000000000..f004a366445a
--- /dev/null
+++ b/rust/kernel/mm/virt.rs
@@ -0,0 +1,193 @@ 
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Virtual memory.
+
+use crate::{
+    bindings,
+    error::{to_result, Result},
+    page::Page,
+    types::Opaque,
+};
+
+/// A wrapper for the kernel's `struct vm_area_struct`.
+///
+/// It represents an area of virtual memory.
+#[repr(transparent)]
+pub struct Area {
+    vma: Opaque<bindings::vm_area_struct>,
+}
+
+impl Area {
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is non-null and valid for the duration of the new area's
+    /// lifetime, with shared access. The caller must ensure that using the pointer for immutable
+    /// operations is okay.
+    #[inline]
+    pub unsafe fn from_ptr<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {
+        // SAFETY: The caller ensures that the pointer is valid.
+        unsafe { &*vma.cast() }
+    }
+
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is non-null and valid for the duration of the new area's
+    /// lifetime, with exclusive access. The caller must ensure that using the pointer for
+    /// immutable and mutable operations is okay.
+    #[inline]
+    pub unsafe fn from_ptr_mut<'a>(vma: *mut bindings::vm_area_struct) -> &'a mut Self {
+        // SAFETY: The caller ensures that the pointer is valid.
+        unsafe { &mut *vma.cast() }
+    }
+
+    /// Returns a raw pointer to this area.
+    #[inline]
+    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
+        self.vma.get()
+    }
+
+    /// Returns the flags associated with the virtual memory area.
+    ///
+    /// The possible flags are a combination of the constants in [`flags`].
+    #[inline]
+    pub fn flags(&self) -> usize {
+        // SAFETY: `self.vma` is valid by the type invariants.
+        unsafe { (*self.as_ptr()).__bindgen_anon_2.vm_flags as _ }
+    }
+
+    /// Sets the flags associated with the virtual memory area.
+    ///
+    /// The possible flags are a combination of the constants in [`flags`].
+    #[inline]
+    pub fn set_flags(&mut self, flags: usize) {
+        // SAFETY: `self.vma` is valid by the type invariants.
+        unsafe { (*self.as_ptr()).__bindgen_anon_2.vm_flags = flags as _ };
+    }
+
+    /// Returns the start address of the virtual memory area.
+    #[inline]
+    pub fn start(&self) -> usize {
+        // SAFETY: `self.vma` is valid by the type invariants.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_start as _ }
+    }
+
+    /// Returns the end address of the virtual memory area.
+    #[inline]
+    pub fn end(&self) -> usize {
+        // SAFETY: `self.vma` is valid by the type invariants.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_end as _ }
+    }
+
+    /// Make this vma anonymous.
+    #[inline]
+    pub fn set_anonymous(&mut self) {
+        // SAFETY: `self.vma` is valid by the type invariants.
+        unsafe { (*self.as_ptr()).vm_ops = core::ptr::null() };
+    }
+
+    /// Maps a single page at the given address within the virtual memory area.
+    #[inline]
+    pub fn vm_insert_page(&mut self, address: usize, page: &Page) -> Result {
+        // SAFETY: The page is guaranteed to be order 0. The range of `address` is already checked
+        // by `vm_insert_page`. `self.vma` and `page.as_ptr()` are guaranteed by their respective
+        // type invariants to be valid.
+        to_result(unsafe { bindings::vm_insert_page(self.as_ptr(), address as _, page.as_ptr()) })
+    }
+
+    /// Unmap pages in the given page range.
+    #[inline]
+    pub fn zap_page_range_single(&self, address: usize, size: usize) {
+        // SAFETY: The `vma` pointer is valid.
+        unsafe {
+            bindings::zap_page_range_single(
+                self.as_ptr(),
+                address as _,
+                size as _,
+                core::ptr::null_mut(),
+            )
+        };
+    }
+}
+
+/// Container for [`Area`] flags.
+pub mod flags {
+    use crate::bindings;
+
+    /// No flags are set.
+    pub const NONE: usize = bindings::VM_NONE as _;
+
+    /// Mapping allows reads.
+    pub const READ: usize = bindings::VM_READ as _;
+
+    /// Mapping allows writes.
+    pub const WRITE: usize = bindings::VM_WRITE as _;
+
+    /// Mapping allows execution.
+    pub const EXEC: usize = bindings::VM_EXEC as _;
+
+    /// Mapping is shared.
+    pub const SHARED: usize = bindings::VM_SHARED as _;
+
+    /// Mapping may be updated to allow reads.
+    pub const MAYREAD: usize = bindings::VM_MAYREAD as _;
+
+    /// Mapping may be updated to allow writes.
+    pub const MAYWRITE: usize = bindings::VM_MAYWRITE as _;
+
+    /// Mapping may be updated to allow execution.
+    pub const MAYEXEC: usize = bindings::VM_MAYEXEC as _;
+
+    /// Mapping may be updated to be shared.
+    pub const MAYSHARE: usize = bindings::VM_MAYSHARE as _;
+
+    /// Do not copy this vma on fork.
+    pub const DONTCOPY: usize = bindings::VM_DONTCOPY as _;
+
+    /// Cannot expand with mremap().
+    pub const DONTEXPAND: usize = bindings::VM_DONTEXPAND as _;
+
+    /// Lock the pages covered when they are faulted in.
+    pub const LOCKONFAULT: usize = bindings::VM_LOCKONFAULT as _;
+
+    /// Is a VM accounted object.
+    pub const ACCOUNT: usize = bindings::VM_ACCOUNT as _;
+
+    /// Should the VM suppress accounting.
+    pub const NORESERVE: usize = bindings::VM_NORESERVE as _;
+
+    /// Huge TLB Page VM.
+    pub const HUGETLB: usize = bindings::VM_HUGETLB as _;
+
+    /// Synchronous page faults.
+    pub const SYNC: usize = bindings::VM_SYNC as _;
+
+    /// Architecture-specific flag.
+    pub const ARCH_1: usize = bindings::VM_ARCH_1 as _;
+
+    /// Wipe VMA contents in child.
+    pub const WIPEONFORK: usize = bindings::VM_WIPEONFORK as _;
+
+    /// Do not include in the core dump.
+    pub const DONTDUMP: usize = bindings::VM_DONTDUMP as _;
+
+    /// Not soft dirty clean area.
+    pub const SOFTDIRTY: usize = bindings::VM_SOFTDIRTY as _;
+
+    /// Can contain "struct page" and pure PFN pages.
+    pub const MIXEDMAP: usize = bindings::VM_MIXEDMAP as _;
+
+    /// MADV_HUGEPAGE marked this vma.
+    pub const HUGEPAGE: usize = bindings::VM_HUGEPAGE as _;
+
+    /// MADV_NOHUGEPAGE marked this vma.
+    pub const NOHUGEPAGE: usize = bindings::VM_NOHUGEPAGE as _;
+
+    /// KSM may merge identical pages.
+    pub const MERGEABLE: usize = bindings::VM_MERGEABLE as _;
+}