From patchwork Fri Nov 1 06:02:36 2024
X-Patchwork-Submitter: Boqun Feng <boqun.feng@gmail.com>
X-Patchwork-Id: 13858758
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Alan Stern,
    Andrea Parri, Will Deacon, Peter Zijlstra, Nicholas Piggin, David Howells,
    Jade Alglave, Luc Maranget, "Paul E. McKenney",
    Akira Yokosawa, Daniel Lustig, Joel Fernandes, Nathan Chancellor,
    Nick Desaulniers, kent.overstreet@gmail.com, Greg Kroah-Hartman,
    elver@google.com, Mark Rutland, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
    Catalin Marinas, torvalds@linux-foundation.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    Trevor Gross, dakr@redhat.com, Frederic Weisbecker, Neeraj Upadhyay,
    Josh Triplett, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
    Lai Jiangshan, Zqiang, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    linux-riscv@lists.infradead.org
Subject: [RFC v2 13/13] rust: sync: rcu: Add RCU protected pointer
Date: Thu, 31 Oct 2024 23:02:36 -0700
Message-ID: <20241101060237.1185533-14-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>
X-Mailing-List: rcu@vger.kernel.org

An RCU protected pointer is an atomic pointer that can be loaded and
dereferenced by multiple RCU readers, while only one updater/writer at a
time changes the value (usually following a read-copy-update pattern).
This is useful for read-mostly data.

The rationale of this patch is to provide a proof of concept of how RCU
could be exposed to the Rust world, and it also serves as an example of
atomic usage. Similar mechanisms such as ArcSwap [1] are already widely
used.

Provide a `Rcu<P>` type with an atomic pointer implementation. `P` has
to be a `ForeignOwnable`, which means the ownership of an object can be
represented by a pointer-sized value. `Rcu::dereference()` requires an
RCU `Guard`, which means dereferencing is only valid under RCU read lock
protection. `Rcu::read_copy_update()` is the operation for updaters; it
requires a `Pin<&mut Self>` for exclusive access, since RCU updaters are
normally exclusive with each other.

A lot of RCU functionality, including asynchronous freeing (call_rcu()
and kfree_rcu()), is still missing and will be future work. Also,
language changes such as field projection [2] are still needed to
provide better ergonomics.
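
For illustration, a reader and an updater put together look roughly like
this (the sketch below mirrors the doctests in the diff; it is not
additional functionality, and the surrounding function is made up for
the example):

    use core::pin::pin;
    use kernel::alloc::{flags, KBox};
    use kernel::prelude::*;
    use kernel::sync::rcu::{self, Rcu};

    fn example() -> Result {
        // Updater/owner side: the `Rcu<P>` is kept pinned in place.
        let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?));

        // Reader side: dereferencing is only valid while the RCU guard lives.
        let g = rcu::read_lock();
        assert_eq!(x.dereference(&g), Some(&100i32));
        drop(g);

        // Updater side: `Pin<&mut Self>` (exclusive access) is required to
        // publish a new object; the old one comes back as an `RcuOld`.
        let old = x.as_mut().replace(KBox::new(101i32, flags::GFP_KERNEL)?);

        // Dropping the `RcuOld` waits for a grace period, then frees the
        // old object.
        drop(old);

        Ok(())
    }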

Acknowledgment: this work is based on a lot of productive discussions
and hard work from others; these are the ones I can remember (sorry if I
forgot your contribution):

* Wedson started the work on RCU field projection, and Benno followed it
  up and has been working on it as a more general language feature.
  Also, Gary's field-projection repo [3] has been used as an example for
  related discussions.

* During Kangrejos 2023 [4], Gary, Benno and Alice provided a lot of
  feedback on the talk from Paul and me: "If you want to use RCU in Rust
  for Linux kernel..."

* During a recent discussion among Benno, Paul and me, Benno suggested
  using `Pin<&mut>` to guarantee exclusive access for updater
  operations.

Link: https://crates.io/crates/arc-swap [1]
Link: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/Field.20Projections/near/474648059 [2]
Link: https://github.com/nbdd0121/field-projection [3]
Link: https://kangrejos.com/2023 [4]
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/rcu.rs | 269 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 268 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
index 5a35495f69a4..8326b2e0986a 100644
--- a/rust/kernel/sync/rcu.rs
+++ b/rust/kernel/sync/rcu.rs
@@ -5,7 +5,11 @@
 //! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)
 
 use crate::bindings;
-use core::marker::PhantomData;
+use crate::{
+    sync::atomic::{Atomic, Relaxed, Release},
+    types::ForeignOwnable,
+};
+use core::{marker::PhantomData, pin::Pin, ptr::NonNull};
 
 /// Evidence that the RCU read side lock is held on the current thread/CPU.
 ///
@@ -50,3 +54,266 @@ fn drop(&mut self) {
 pub fn read_lock() -> Guard {
     Guard::new()
 }
+
+/// An RCU protected pointer; the pointed object is protected by RCU.
+///
+/// # Invariants
+///
+/// Either the pointer is null, or it points to a return value of [`P::into_foreign`] and the atomic
+/// variable exclusively owns the pointer.
+pub struct Rcu<P: ForeignOwnable>(Atomic<*mut core::ffi::c_void>, PhantomData<P>);
+
+/// A pointer that has been unpublished, but hasn't waited for a grace period yet.
+///
+/// The pointed object may still have an existing RCU reader. Therefore a grace period is needed to
+/// free the object.
+///
+/// # Invariants
+///
+/// The pointer has to be a return value of [`P::into_foreign`] and [`Self`] exclusively owns the
+/// pointer.
+pub struct RcuOld<P: ForeignOwnable>(NonNull<core::ffi::c_void>, PhantomData<P>);
+
+impl<P: ForeignOwnable> Drop for RcuOld<P> {
+    fn drop(&mut self) {
+        // SAFETY: As long as called in a sleepable context, which should be checked by klint,
+        // `synchronize_rcu()` is safe to call.
+        unsafe {
+            bindings::synchronize_rcu();
+        }
+
+        // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+        // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+        // `ForeignOwnable::borrow()` anymore.
+        let p: P = unsafe { P::from_foreign(self.0.as_ptr()) };
+        drop(p);
+    }
+}
+
+impl<P: ForeignOwnable> Rcu<P> {
+    /// Creates a new RCU pointer.
+    pub fn new(p: P) -> Self {
+        // INVARIANTS: The return value of `p.into_foreign()` is directly stored in the atomic
+        // variable.
+        Self(Atomic::new(p.into_foreign().cast_mut()), PhantomData)
+    }
+
+    /// Dereferences the protected object.
+    ///
+    /// Returns `Some(b)`, where `b` is a reference-like borrowed type, if the pointer is not null,
+    /// otherwise returns `None`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let x = Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?);
+    ///
+    /// let g = rcu::read_lock();
+    /// // Read under RCU read lock protection.
+    /// let v = x.dereference(&g);
+    ///
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    ///
+    /// Note the borrowed access can outlive the reference of the [`Rcu<P>`], this is because as
+    /// long as the RCU read lock is held, the pointed object should remain valid.
+    ///
+    /// In the following case, the main thread is responsible for the ownership of `shared`, i.e.
+    /// it will drop it eventually, and a work item can temporarily access the `shared` via
+    /// `cloned`, but the use of the dereferenced object doesn't depend on `cloned`'s existence.
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// # use kernel::workqueue::system;
+    /// # use kernel::sync::{Arc, atomic::{Atomic, Acquire, Release}};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// struct Config {
+    ///     a: i32,
+    ///     b: i32,
+    ///     c: i32,
+    /// }
+    ///
+    /// let config = KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?;
+    ///
+    /// let shared = Arc::new(Rcu::new(config), flags::GFP_KERNEL)?;
+    /// let cloned = shared.clone();
+    ///
+    /// // Use an atomic to simulate a special refcounting.
+    /// static FLAG: Atomic<i32> = Atomic::new(0);
+    ///
+    /// system().try_spawn(flags::GFP_KERNEL, move || {
+    ///     let g = rcu::read_lock();
+    ///     let v = cloned.dereference(&g).unwrap();
+    ///     drop(cloned); // Release the reference to `shared`.
+    ///     FLAG.store(1, Release);
+    ///
+    ///     // ... but still need to access `v`.
+    ///     assert_eq!(v.a, 1);
+    ///     drop(g);
+    /// });
+    ///
+    /// // Wait until `cloned` is dropped.
+    /// while FLAG.load(Acquire) == 0 {
+    ///     // SAFETY: Sleep should be safe.
+    ///     unsafe { kernel::bindings::schedule(); }
+    /// }
+    ///
+    /// drop(shared);
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
+        // Ordering: Address dependency pairs with the `store(Release)` in read_copy_update().
+        let ptr = self.0.load(Relaxed);
+
+        if !ptr.is_null() {
+            // SAFETY:
+            // - Since `ptr` is not null, it has to be a return value of `P::into_foreign()`.
+            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guard, this guarantees the
+            //   return value will only be used under RCU read lock, and the RCU read lock prevents
+            //   the pass of a grace period that the drop of `RcuOld` or `Rcu` is waiting for,
+            //   therefore no `from_foreign()` will be called for `ptr` as long as `Borrowed` exists.
+            //
+            //   CPU 0                                    CPU 1
+            //   =====                                    =====
+            //   { `x` is a reference to Rcu<KBox<i32>> }
+            //   let g = rcu::read_lock();
+            //
+            //   if let Some(b) = x.dereference(&g) {
+            //       // drop(g); cannot be done, since `b` is still alive.
+            //
+            //                                            if let Some(old) = x.replace(...) {
+            //                                                // `x` is null now.
+            //       println!("{}", b);
+            //                                            }
+            //                                            drop(old):
+            //                                                synchronize_rcu();
+            //       drop(g);
+            //                                                // a grace period passed.
+            //                                                // No `Borrowed` exists now.
+            //                                                from_foreign(...);
+            //   }
+            Some(unsafe { P::borrow(ptr) })
+        } else {
+            None
+        }
+    }
+
+    /// Read, copy and update the pointer with a new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// The `Pin<&mut Self>` is needed because this function needs the exclusive access to
+    /// [`Rcu<P>`], otherwise two `read_copy_update()`s may get the same old object and double free.
+    /// Using `Pin<&mut Self>` provides the exclusive access that the C side requires with the type
+    /// system checking.
+    ///
+    /// Also this has to be `Pin` because a `&mut Self` may allow users to `swap()` safely, that
+    /// will break the atomicity. A [`Rcu<P>`] should be structurally pinned in the struct that
+    /// contains it.
+    ///
+    /// Note that `Pin<&mut Self>` cannot assume noalias here, because [`Atomic`] is an
+    /// [`Opaque`], which has the same effect on aliasing rules as [`UnsafePinned`].
+    ///
+    /// [`UnsafePinned`]: https://rust-lang.github.io/rfcs/3467-unsafe-pinned.html
+    pub fn read_copy_update<F>(self: Pin<&mut Self>, f: F) -> Option<RcuOld<P>>
+    where
+        F: FnOnce(Option<P::Borrowed<'_>>) -> Option<P>,
+    {
+        // step 1: READ.
+        // Ordering: Address dependency pairs with the `store(Release)` in read_copy_update().
+        let old_ptr = NonNull::new(self.0.load(Relaxed));
+
+        let old = old_ptr.map(|nonnull| {
+            // SAFETY: Per type invariants `old_ptr` has to be a value returned by a previous
+            // `into_foreign()`, and the exclusive reference `self` guarantees that `from_foreign()`
+            // has not been called.
+            unsafe { P::borrow(nonnull.as_ptr()) }
+        });
+
+        // step 2: COPY, or more generally, initializing `new` based on `old`.
+        let new = f(old);
+
+        // step 3: UPDATE.
+        if let Some(new) = new {
+            let new_ptr = new.into_foreign().cast_mut();
+            // Ordering: Pairs with the address dependency in `dereference()` and
+            // `read_copy_update()`.
+            // INVARIANTS: `new.into_foreign()` is directly stored into the atomic variable.
+            self.0.store(new_ptr, Release);
+        } else {
+            // Ordering: Setting to a null pointer doesn't need to be Release.
+            // INVARIANTS: The atomic variable is set to be null.
+            self.0.store(core::ptr::null_mut(), Relaxed);
+        }
+
+        // INVARIANTS: The exclusive reference guarantees that the ownership of a previous
+        // `into_foreign()` is transferred to the `RcuOld`.
+        Some(RcuOld(old_ptr?, PhantomData))
+    }
+
+    /// Replaces the pointer with a new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use core::pin::pin;
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?));
+    /// let q = KBox::new(101i32, flags::GFP_KERNEL)?;
+    ///
+    /// // Read under RCU read lock protection.
+    /// let g = rcu::read_lock();
+    /// let v = x.dereference(&g);
+    ///
+    /// // Replace with a new object.
+    /// let old = x.as_mut().replace(q);
+    ///
+    /// assert!(old.is_some());
+    ///
+    /// // `v` should still read the old value.
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// // New readers should get the new value.
+    /// assert_eq!(x.dereference(&g), Some(&101i32));
+    ///
+    /// drop(g);
+    ///
+    /// // Can free the object outside the read-side critical section.
+    /// drop(old);
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn replace(self: Pin<&mut Self>, new: P) -> Option<RcuOld<P>> {
+        self.read_copy_update(|_| Some(new))
+    }
+}
+
+impl<P: ForeignOwnable> Drop for Rcu<P> {
+    fn drop(&mut self) {
+        let ptr = *self.0.get_mut();
+        if !ptr.is_null() {
+            // SAFETY: As long as called in a sleepable context, which should be checked by klint,
+            // `synchronize_rcu()` is safe to call.
+            unsafe {
+                bindings::synchronize_rcu();
+            }
+
+            // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+            // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+            // `ForeignOwnable::borrow()` anymore.
+            drop(unsafe { P::from_foreign(ptr) });
+        }
+    }
+}
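
For reviewers: since a `Rcu<P>` should be structurally pinned in the
struct that contains it, a user-side sketch of what that can look like
follows. This is illustrative only and not part of the patch; the
`Device` type, its field name and the manual `Pin::map_unchecked_mut()`
projection are made up for the example (field projection [2] is meant to
remove the unsafe projection eventually):

    use core::pin::Pin;
    use kernel::alloc::KBox;
    use kernel::sync::rcu::{Rcu, RcuOld};

    struct Device {
        // Structurally pinned: never moved out of `Device`, and `Device`
        // itself is only handed out behind `Pin` once readers can see it.
        config: Rcu<KBox<i32>>,
    }

    impl Device {
        fn config_mut(self: Pin<&mut Self>) -> Pin<&mut Rcu<KBox<i32>>> {
            // SAFETY: `config` is structurally pinned (see above).
            unsafe { self.map_unchecked_mut(|d| &mut d.config) }
        }

        // Publishes a new config; dropping the returned `RcuOld` waits for
        // a grace period and then frees the old object.
        fn update_config(self: Pin<&mut Self>, new: KBox<i32>) -> Option<RcuOld<KBox<i32>>> {
            self.config_mut().replace(new)
        }
    }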