From patchwork Fri Nov 1 06:02:27 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Boqun Feng <boqun.feng@gmail.com>
X-Patchwork-Id: 13858749
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Alice Ryhl, Alan Stern, Andrea Parri, Will Deacon, Peter Zijlstra,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	"Paul E. McKenney", Akira Yokosawa, Daniel Lustig, Joel Fernandes,
	Nathan Chancellor, Nick Desaulniers, kent.overstreet@gmail.com,
	Greg Kroah-Hartman, elver@google.com, Mark Rutland, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin", Catalin Marinas, torvalds@linux-foundation.org,
	linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	Trevor Gross, dakr@redhat.com, Frederic Weisbecker, Neeraj Upadhyay,
	Josh Triplett, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
	Lai Jiangshan, Zqiang, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	linux-riscv@lists.infradead.org
Subject: [RFC v2 04/13] rust: sync: atomic: Add generic atomics
Date: Thu, 31 Oct 2024 23:02:27 -0700
Message-ID: <20241101060237.1185533-5-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

To provide LKMM atomics for Rust code, a generic `Atomic<T>` is added.
Currently `T` needs to be `Send + Copy` because these are the
straightforward use cases, and all basic types support this.

The trait `AllowAtomic` should only be implemented inside the atomic
mod until the generic atomic framework is mature enough (unless the
implementer is a `#[repr(transparent)]` new type).

`AtomicImpl` types are automatically `AllowAtomic`, and so far only the
basic operations load() and store() are introduced.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic.rs         |   2 +
 rust/kernel/sync/atomic/generic.rs | 253 +++++++++++++++++++++++++++++
 2 files changed, 255 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index be2e8583595f..b791abc59b61 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,9 @@
 //!
 //! [`LKMM`]: srctree/tools/memory-model/
 
+pub mod generic;
 pub mod ops;
 pub mod ordering;
 
+pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..204da38e2691
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::types::Opaque;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Invariants
+///
+/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race;
+/// this is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety
+/// requirement of the usage on pointers returned by [`Self::as_ptr`].
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are
+// atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Atomics that support basic atomic operations.
+///
+/// TODO: Unless the `impl` is a `#[repr(transparent)]` new type of an existing
+/// [`AllowAtomic`], the impl block should only be done in the atomic mod. And currently only
+/// basic integer types can implement this trait in the atomic mod.
+///
+/// # Safety
+///
+/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+
+    /// Converts into a [`Self::Repr`].
+    fn into_repr(self) -> Self::Repr;
+
+    /// Converts from a [`Self::Repr`].
+    fn from_repr(repr: Self::Repr) -> Self;
+}
+
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T {
+    type Repr = Self;
+
+    fn into_repr(self) -> Self::Repr {
+        self
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+    /// Creates a new atomic.
+    pub const fn new(v: T) -> Self {
+        Self(Opaque::new(v))
+    }
+
+    /// Creates a reference to [`Self`] from a pointer.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` has to be a valid pointer.
+    /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+    /// - For the whole lifetime `'a`, other accesses to the object cannot cause data races
+    ///   (defined by [`LKMM`]) against atomic operations on the returned reference.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`]
+    /// can achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```rust
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds.
+    /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses on it are
+    /// // atomic, so no data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses on it are
+    /// // atomic, so no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    ///
+    /// However, this should only be used when communicating with the C side or manipulating
+    /// a C struct.
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` is transparent to `Atomic<T>`.
+        // SAFETY: Per the function safety requirement, `ptr` is a valid pointer and the
+        // object will live long enough. It's safe to return a `&Atomic<T>` because the
+        // function safety requirement guarantees other accesses won't cause data races.
+        unsafe { &*ptr.cast::<Self>() }
+    }
+
+    /// Returns a pointer to the underlying atomic variable.
+    ///
+    /// Extra safety requirement on using the returned pointer: the operations done via the
+    /// pointer cannot cause data races defined by [`LKMM`].
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    pub const fn as_ptr(&self) -> *mut T {
+        self.0.get()
+    }
+
+    /// Returns a mutable reference to the underlying atomic variable.
+    ///
+    /// This is safe because the mutable reference of the atomic variable guarantees
+    /// exclusive access.
+    pub fn get_mut(&mut self) -> &mut T {
+        // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
+        // initialized. `&mut self` guarantees the exclusive access, so it's safe to reborrow
+        // mutably.
+        unsafe { &mut *self.as_ptr() }
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasBasicOps,
+{
+    /// Loads the value from the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// Simple usages:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42i64);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    /// ```
+    ///
+    /// Customized new types in [`Atomic`]:
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+    ///
+    /// #[derive(Clone, Copy)]
+    /// #[repr(transparent)]
+    /// struct NewType(u32);
+    ///
+    /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment
+    /// // as `i32`.
+    /// unsafe impl AllowAtomic for NewType {
+    ///     type Repr = i32;
+    ///
+    ///     fn into_repr(self) -> Self::Repr {
+    ///         self.0 as i32
+    ///     }
+    ///
+    ///     fn from_repr(repr: Self::Repr) -> Self {
+    ///         NewType(repr as u32)
+    ///     }
+    /// }
+    ///
+    /// let n = Atomic::new(NewType(0));
+    ///
+    /// assert_eq!(0, n.load(Relaxed).0);
+    /// ```
+    #[inline(always)]
+    pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_read*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        let v = unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_read(a)
+            } else {
+                T::Repr::atomic_read_acquire(a)
+            }
+        };
+
+        T::from_repr(v)
+    }
+
+    /// Stores a value to the atomic variable.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.store(43, Relaxed);
+    ///
+    /// assert_eq!(43, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+        let v = T::into_repr(v);
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - For calling the atomic_set*() function:
+        //   - `self.as_ptr()` is a valid pointer, and per the safety requirement of
+        //     `AllowAtomic`, a valid `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a
+        //     valid pointer,
+        //   - per the type invariants, the following atomic operation won't cause data races.
+        // - For the extra safety requirement on the usage of pointers returned by
+        //   `self.as_ptr()`:
+        //   - atomic operations are used here.
+        unsafe {
+            if Ordering::IS_RELAXED {
+                T::Repr::atomic_set(a, v)
+            } else {
+                T::Repr::atomic_set_release(a, v)
+            }
+        };
+    }
+}
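
---

(Editor's note, not part of the patch: below is a minimal sketch of how this API composes
into the usual release/acquire message-passing pattern. It only uses `Atomic::new()`,
`load()` and `store()` from this patch plus the `Relaxed`/`Acquire`/`Release` ordering
types re-exported in `atomic.rs` above; `Shared`, `publish()` and `consume()` are
hypothetical names for illustration.)

```rust
use kernel::sync::atomic::{Acquire, Atomic, Relaxed, Release};

// A one-shot "ready" flag paired with a payload, both shared across
// execution contexts (`Atomic<T>` is `Sync`).
struct Shared {
    ready: Atomic<i32>,
    data: Atomic<i64>,
}

fn publish(s: &Shared) {
    // Write the payload first; `Relaxed` suffices because the release
    // store below orders it before `ready` for any acquire reader.
    s.data.store(42, Relaxed);

    // Release store: pairs with the acquire load in `consume()`.
    s.ready.store(1, Release);
}

fn consume(s: &Shared) -> Option<i64> {
    // Acquire load: if `ready == 1` is observed, the `data` store in
    // `publish()` is guaranteed to be visible.
    if s.ready.load(Acquire) == 1 {
        Some(s.data.load(Relaxed))
    } else {
        None
    }
}

fn example() -> Option<i64> {
    let s = Shared {
        ready: Atomic::new(0),
        data: Atomic::new(0),
    };

    publish(&s);
    consume(&s) // Some(42)
}
```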