From patchwork Fri Nov 1 06:02:24 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858732
From: Boqun Feng <boqun.feng@gmail.com>
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Alan Stern, Andrea Parri, Will Deacon, Peter Zijlstra, Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget, "Paul E. McKenney", Akira Yokosawa, Daniel Lustig, Joel Fernandes, Nathan Chancellor, Nick Desaulniers, kent.overstreet@gmail.com, Greg Kroah-Hartman, elver@google.com, Mark Rutland, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Catalin Marinas, torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross, dakr@redhat.com, Frederic Weisbecker, Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Subject: [RFC v2 01/13] rust: Introduce atomic API helpers
Date: Thu, 31 Oct 2024 23:02:24 -0700
Message-ID: <20241101060237.1185533-2-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

In order to support LKMM atomics in Rust, add rust_helper_* for atomic APIs.
These helpers ensure that the Rust implementation of LKMM atomics is the
same as the C one, avoiding the maintenance burden of two similar atomic
implementations in asm.

Originally-by: Mark Rutland
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic.c                     | 1038 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   65 ++
 4 files changed, 1105 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..00bf10887928
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1038 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper + +__rust_helper int +rust_helper_atomic_read(const atomic_t *v) +{ + return atomic_read(v); +} + +__rust_helper int +rust_helper_atomic_read_acquire(const atomic_t *v) +{ + return atomic_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic_set(atomic_t *v, int i) +{ + atomic_set(v, i); +} + +__rust_helper void +rust_helper_atomic_set_release(atomic_t *v, int i) +{ + atomic_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic_add(int i, atomic_t *v) +{ + atomic_add(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return(int i, atomic_t *v) +{ + return atomic_add_return(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_acquire(int i, atomic_t *v) +{ + return atomic_add_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_release(int i, atomic_t *v) +{ + return atomic_add_return_release(i, v); +} + +__rust_helper int +rust_helper_atomic_add_return_relaxed(int i, atomic_t *v) +{ + return atomic_add_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add(int i, atomic_t *v) +{ + return atomic_fetch_add(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v) +{ + return atomic_fetch_add_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_release(int i, atomic_t *v) +{ + return atomic_fetch_add_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_sub(int i, atomic_t *v) +{ + atomic_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return(int i, atomic_t *v) +{ + return atomic_sub_return(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_acquire(int i, atomic_t *v) +{ + return atomic_sub_return_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_sub_return_release(int i, atomic_t *v) +{ + return atomic_sub_return_release(i, v); +} + +__rust_helper int 
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v) +{ + return atomic_sub_return_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub(int i, atomic_t *v) +{ + return atomic_fetch_sub(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v) +{ + return atomic_fetch_sub_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_release(int i, atomic_t *v) +{ + return atomic_fetch_sub_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_inc(atomic_t *v) +{ + atomic_inc(v); +} + +__rust_helper int +rust_helper_atomic_inc_return(atomic_t *v) +{ + return atomic_inc_return(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_acquire(atomic_t *v) +{ + return atomic_inc_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_release(atomic_t *v) +{ + return atomic_inc_return_release(v); +} + +__rust_helper int +rust_helper_atomic_inc_return_relaxed(atomic_t *v) +{ + return atomic_inc_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc(atomic_t *v) +{ + return atomic_fetch_inc(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_acquire(atomic_t *v) +{ + return atomic_fetch_inc_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_release(atomic_t *v) +{ + return atomic_fetch_inc_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_inc_relaxed(atomic_t *v) +{ + return atomic_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_dec(atomic_t *v) +{ + atomic_dec(v); +} + +__rust_helper int +rust_helper_atomic_dec_return(atomic_t *v) +{ + return atomic_dec_return(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_acquire(atomic_t *v) +{ + return atomic_dec_return_acquire(v); +} + +__rust_helper int +rust_helper_atomic_dec_return_release(atomic_t *v) +{ + return atomic_dec_return_release(v); 
+} + +__rust_helper int +rust_helper_atomic_dec_return_relaxed(atomic_t *v) +{ + return atomic_dec_return_relaxed(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec(atomic_t *v) +{ + return atomic_fetch_dec(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_acquire(atomic_t *v) +{ + return atomic_fetch_dec_acquire(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_release(atomic_t *v) +{ + return atomic_fetch_dec_release(v); +} + +__rust_helper int +rust_helper_atomic_fetch_dec_relaxed(atomic_t *v) +{ + return atomic_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic_and(int i, atomic_t *v) +{ + atomic_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and(int i, atomic_t *v) +{ + return atomic_fetch_and(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v) +{ + return atomic_fetch_and_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_release(int i, atomic_t *v) +{ + return atomic_fetch_and_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_andnot(int i, atomic_t *v) +{ + atomic_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot(int i, atomic_t *v) +{ + return atomic_fetch_andnot(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v) +{ + return atomic_fetch_andnot_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v) +{ + return atomic_fetch_andnot_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_or(int i, atomic_t *v) +{ + atomic_or(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or(int i, atomic_t *v) +{ + return atomic_fetch_or(i, v); +} + +__rust_helper int 
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v) +{ + return atomic_fetch_or_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_release(int i, atomic_t *v) +{ + return atomic_fetch_or_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic_xor(int i, atomic_t *v) +{ + atomic_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor(int i, atomic_t *v) +{ + return atomic_fetch_xor(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v) +{ + return atomic_fetch_xor_acquire(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_release(int i, atomic_t *v) +{ + return atomic_fetch_xor_release(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v) +{ + return atomic_fetch_xor_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_xchg(atomic_t *v, int new) +{ + return atomic_xchg(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_acquire(atomic_t *v, int new) +{ + return atomic_xchg_acquire(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_release(atomic_t *v, int new) +{ + return atomic_xchg_release(v, new); +} + +__rust_helper int +rust_helper_atomic_xchg_relaxed(atomic_t *v, int new) +{ + return atomic_xchg_relaxed(v, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_acquire(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_release(v, old, new); +} + +__rust_helper int +rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) +{ + return atomic_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool 
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) +{ + return atomic_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic_dec_and_test(atomic_t *v) +{ + return atomic_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_inc_and_test(atomic_t *v) +{ + return atomic_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return atomic_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_release(int i, atomic_t *v) +{ + return atomic_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return atomic_add_negative_relaxed(i, v); +} + +__rust_helper int +rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u) +{ + return atomic_fetch_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_add_unless(atomic_t *v, int a, int u) +{ + return atomic_add_unless(v, a, u); +} + +__rust_helper bool +rust_helper_atomic_inc_not_zero(atomic_t *v) +{ + return atomic_inc_not_zero(v); +} + +__rust_helper bool +rust_helper_atomic_inc_unless_negative(atomic_t *v) +{ + return atomic_inc_unless_negative(v); +} + +__rust_helper bool +rust_helper_atomic_dec_unless_positive(atomic_t *v) +{ + return 
atomic_dec_unless_positive(v); +} + +__rust_helper int +rust_helper_atomic_dec_if_positive(atomic_t *v) +{ + return atomic_dec_if_positive(v); +} + +__rust_helper s64 +rust_helper_atomic64_read(const atomic64_t *v) +{ + return atomic64_read(v); +} + +__rust_helper s64 +rust_helper_atomic64_read_acquire(const atomic64_t *v) +{ + return atomic64_read_acquire(v); +} + +__rust_helper void +rust_helper_atomic64_set(atomic64_t *v, s64 i) +{ + atomic64_set(v, i); +} + +__rust_helper void +rust_helper_atomic64_set_release(atomic64_t *v, s64 i) +{ + atomic64_set_release(v, i); +} + +__rust_helper void +rust_helper_atomic64_add(s64 i, atomic64_t *v) +{ + atomic64_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return(s64 i, atomic64_t *v) +{ + return atomic64_add_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v) +{ + return atomic64_add_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_add_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_sub(s64 i, atomic64_t *v) +{ + atomic64_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return(s64 i, atomic64_t *v) +{ + return atomic64_sub_return(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_acquire(s64 i, 
atomic64_t *v) +{ + return atomic64_sub_return_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_sub_return_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_sub_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_inc(atomic64_t *v) +{ + atomic64_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return(atomic64_t *v) +{ + return atomic64_inc_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_acquire(atomic64_t *v) +{ + return atomic64_inc_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_release(atomic64_t *v) +{ + return atomic64_inc_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_inc_return_relaxed(atomic64_t *v) +{ + return atomic64_inc_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc(atomic64_t *v) +{ + return atomic64_fetch_inc(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v) +{ + return atomic64_fetch_inc_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_release(atomic64_t *v) +{ + return atomic64_fetch_inc_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v) +{ + return atomic64_fetch_inc_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_dec(atomic64_t *v) +{ + atomic64_dec(v); +} + +__rust_helper s64 
+rust_helper_atomic64_dec_return(atomic64_t *v) +{ + return atomic64_dec_return(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_acquire(atomic64_t *v) +{ + return atomic64_dec_return_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_release(atomic64_t *v) +{ + return atomic64_dec_return_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_dec_return_relaxed(atomic64_t *v) +{ + return atomic64_dec_return_relaxed(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec(atomic64_t *v) +{ + return atomic64_fetch_dec(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v) +{ + return atomic64_fetch_dec_acquire(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_release(atomic64_t *v) +{ + return atomic64_fetch_dec_release(v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v) +{ + return atomic64_fetch_dec_relaxed(v); +} + +__rust_helper void +rust_helper_atomic64_and(s64 i, atomic64_t *v) +{ + atomic64_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_and_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_andnot(s64 i, atomic64_t *v) +{ + atomic64_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v) +{ + 
return atomic64_fetch_andnot_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_andnot_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_or(s64 i, atomic64_t *v) +{ + atomic64_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_or_relaxed(i, v); +} + +__rust_helper void +rust_helper_atomic64_xor(s64 i, atomic64_t *v) +{ + atomic64_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_acquire(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_release(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_fetch_xor_relaxed(i, v); +} + +__rust_helper s64 +rust_helper_atomic64_xchg(atomic64_t *v, s64 new) +{ + return atomic64_xchg(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new) +{ + return atomic64_xchg_acquire(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new) +{ + return atomic64_xchg_release(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new) +{ + return atomic64_xchg_relaxed(v, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg(v, 
old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_acquire(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_release(v, old, new); +} + +__rust_helper s64 +rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) +{ + return atomic64_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_acquire(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_release(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) +{ + return atomic64_try_cmpxchg_relaxed(v, old, new); +} + +__rust_helper bool +rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v) +{ + return atomic64_sub_and_test(i, v); +} + +__rust_helper bool +rust_helper_atomic64_dec_and_test(atomic64_t *v) +{ + return atomic64_dec_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_inc_and_test(atomic64_t *v) +{ + return atomic64_inc_and_test(v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative(s64 i, atomic64_t *v) +{ + return atomic64_add_negative(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_acquire(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_release(i, v); +} + +__rust_helper bool +rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return atomic64_add_negative_relaxed(i, v); +} + +__rust_helper s64 
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+        return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+        return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+        return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+        return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+        return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+        return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// b032d261814b3e119b72dbf7d21447f6731325ee
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 20a0c69d5cc7..ab5a3f1be241 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
  * Sorted alphabetically.
  */

+#include "atomic.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..72f2e5bde0c6
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,65 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+        local meta="$1"; shift
+        local pfx="$1"; shift
+        local name="$1"; shift
+        local sfx="$1"; shift
+        local order="$1"; shift
+        local atomic="$1"; shift
+        local int="$1"; shift
+
+        local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+        local ret="$(gen_ret_type "${meta}" "${int}")"
+        local params="$(gen_params "${int}" "${atomic}" "$@")"
+        local args="$(gen_args "$@")"
+        local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+        ${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat <<EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+        gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+        gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF
From: Boqun Feng To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H.
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 02/13] rust: sync: Add basic atomic operation mapping framework Date: Thu, 31 Oct 2024 23:02:25 -0700 Message-ID: <20241101060237.1185533-3-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Preparation for generic atomic implementation. To unify the implementation of a generic method over `i32` and `i64`, the C side atomic methods need to be grouped so that in a generic method, they can be referred as ::, otherwise their parameters and return values differ between `i32` and `i64`, which would require using `transmute()` to unify the type into a `T`. Introduce `AtomicImpl` to represent a basic type in Rust that has a direct mapping to an atomic implementation from C. This trait is sealed, and currently only `i32` and `i64` impl this. Further, different methods are put into different `*Ops` trait groups, and this is for the future when smaller types like `i8`/`i16` are supported but only with a limited set of APIs (e.g. only set(), load(), xchg() and cmpxchg(), no add() or sub() etc). While the atomic mod is introduced, documentation is also added for memory models and data races. Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect my responsibility for the Rust atomic mod.
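The grouping described above can be sketched as a small self-contained Rust program. Note this is an editorial illustration only: the `c_atomic_read`/`c_atomic64_read` functions below are stand-ins for the generated C helper wrappers (the real kernel code calls `rust_helper_atomic*` through bindings), so all names and signatures here are assumptions, not the patch's actual API.

```rust
// Sketch only: a sealed marker trait groups the per-width operations so that
// generic code can name them as <T as AtomicImpl>::atomic_read without any
// transmute() between i32 and i64.
mod private {
    pub trait Sealed {}
    impl Sealed for i32 {}
    impl Sealed for i64 {}
}

// Stand-ins for the C-side atomic primitives (illustrative, not the kernel API).
fn c_atomic_read(v: &i32) -> i32 {
    *v
}
fn c_atomic64_read(v: &i64) -> i64 {
    *v
}

/// Each implementing type forwards to its own width-specific primitive, so the
/// parameter and return types line up without any casts.
pub trait AtomicImpl: Copy + Send + private::Sealed {
    fn atomic_read(v: &Self) -> Self;
}

impl AtomicImpl for i32 {
    fn atomic_read(v: &i32) -> i32 {
        c_atomic_read(v)
    }
}

impl AtomicImpl for i64 {
    fn atomic_read(v: &i64) -> i64 {
        c_atomic64_read(v)
    }
}

// A single generic function now works for both widths.
pub fn generic_read<T: AtomicImpl>(v: &T) -> T {
    <T as AtomicImpl>::atomic_read(v)
}

fn main() {
    assert_eq!(generic_read(&1i32), 1);
    assert_eq!(generic_read(&2i64), 2);
}
```

Because `private::Sealed` is not exported, no code outside the module can add further `AtomicImpl` types, mirroring the sealed-trait design the commit message describes.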
Signed-off-by: Boqun Feng --- MAINTAINERS | 4 +- rust/kernel/sync.rs | 1 + rust/kernel/sync/atomic.rs | 19 ++++ rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++ 4 files changed, 222 insertions(+), 1 deletion(-) create mode 100644 rust/kernel/sync/atomic.rs create mode 100644 rust/kernel/sync/atomic/ops.rs diff --git a/MAINTAINERS b/MAINTAINERS index b77f4495dcf4..e09471027a63 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3635,7 +3635,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c ATOMIC INFRASTRUCTURE M: Will Deacon M: Peter Zijlstra -R: Boqun Feng +M: Boqun Feng R: Mark Rutland L: linux-kernel@vger.kernel.org S: Maintained @@ -3644,6 +3644,8 @@ F: arch/*/include/asm/atomic*.h F: include/*/atomic*.h F: include/linux/refcount.h F: scripts/atomic/ +F: rust/kernel/sync/atomic.rs +F: rust/kernel/sync/atomic/ ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER M: Bradley Grove diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs index 0ab20975a3b5..66ac3752ca71 100644 --- a/rust/kernel/sync.rs +++ b/rust/kernel/sync.rs @@ -8,6 +8,7 @@ use crate::types::Opaque; mod arc; +pub mod atomic; mod condvar; pub mod lock; mod locked_by; diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs new file mode 100644 index 000000000000..21b87563667e --- /dev/null +++ b/rust/kernel/sync/atomic.rs @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Atomic primitives. +//! +//! These primitives have the same semantics as their C counterparts, and the precise definitions of +//! semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is the +//! only model for Rust code in the kernel, and Rust's own atomics should be avoided. +//! +//! # Data races +//! +//! [`LKMM`] atomics have different rules regarding data races: +//! +//! - A normal read doesn't data-race with an atomic read. +//! - A normal write from the C side is treated as an atomic write if +//! CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y. +//! +//!
[`LKMM`]: srctree/tools/memory-model/ + +pub mod ops; diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs new file mode 100644 index 000000000000..59101a0d0273 --- /dev/null +++ b/rust/kernel/sync/atomic/ops.rs @@ -0,0 +1,199 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Atomic implementations. +//! +//! Provides 1:1 mapping of atomic implementations. + +use crate::bindings::*; +use crate::macros::paste; + +mod private { + /// Sealed trait marker to disable customized impls on atomic implementation traits. + pub trait Sealed {} } + +// `i32` and `i64` are the only supported atomic implementations. +impl private::Sealed for i32 {} +impl private::Sealed for i64 {} + +/// A marker trait for types that implement atomic operations with C side primitives. +/// +/// This trait is sealed, and only types that have a direct mapping to the C side atomics should +/// impl this: +/// +/// - `i32` maps to `atomic_t`. +/// - `i64` maps to `atomic64_t`. +pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {} + +// `atomic_t` impl atomic operations on `i32`. +impl AtomicImpl for i32 {} + +// `atomic64_t` impl atomic operations on `i64`. +impl AtomicImpl for i64 {} + +// This macro generates the function signature with given argument list and return type. +macro_rules! declare_atomic_method { + ( + $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)? + ) => { + paste!( + #[doc = concat!("Atomic ", stringify!($func))] + #[doc = "# Safety"] + #[doc = "- any pointer passed to the function has to be a valid pointer"] + #[doc = "- Accesses must not cause data races per LKMM:"] + #[doc = " - atomic read racing with normal read, normal write or atomic write is not a data race."] + #[doc = " - atomic write racing with normal read or normal write is a data race, unless the"] + #[doc = " normal accesses are done on the C side and considered as immune to data"] + #[doc = " races, e.g.
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."] + unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?; + ); + }; + ( + $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? + ) => { + paste!( + declare_atomic_method!( + [< $func _ $variant >]($($arg_sig)*) $(-> $ret)? + ); + ); + + declare_atomic_method!( + $func [$($rest)*]($($arg_sig)*) $(-> $ret)? + ); + }; + ( + $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)? + ) => { + declare_atomic_method!( + $func($($arg_sig)*) $(-> $ret)? + ); + } } + +// This macro generates the function implementation with given argument list and return type, and it +// will replace "call(...)" expression with "$ctype _ $func" to call the real C function. +macro_rules! impl_atomic_method { + ( + ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? { + call($($c_arg:expr),*) + } + ) => { + paste!( + #[inline(always)] + unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? { + // SAFETY: Per function safety requirement, all pointers are valid, and accesses + // won't cause data race per LKMM. + unsafe { [< $ctype _ $func >]($($c_arg,)*) } + } + ); + }; + ( + ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? { + call($($arg:tt)*) + } + ) => { + paste!( + impl_atomic_method!( + ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? { + call($($arg)*) + } + ); + ); + impl_atomic_method!( + ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? { + call($($arg)*) + } + ); + }; + ( + ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? { + call($($arg:tt)*) + } + ) => { + impl_atomic_method!( + ($ctype) $func($($arg_sig)*) $(-> $ret)? { + call($($arg)*) + } + ); + } } + +// Declares $ops trait with methods and implements the trait for `i32` and `i64`. +macro_rules! declare_and_impl_atomic_methods { + ($ops:ident ($doc:literal) { + $( + $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)?
{ + call($($arg:tt)*) + } + )* + }) => { + #[doc = $doc] + pub trait $ops: AtomicImpl { + $( + declare_atomic_method!( + $func[$($variant)*]($($arg_sig)*) $(-> $ret)? + ); + )* + } + + impl $ops for i32 { + $( + impl_atomic_method!( + (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? { + call($($arg)*) + } + ); + )* + } + + impl $ops for i64 { + $( + impl_atomic_method!( + (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? { + call($($arg)*) + } + ); + )* + } + } +} + +declare_and_impl_atomic_methods!( + AtomicHasBasicOps ("Basic atomic operations") { + read[acquire](ptr: *mut Self) -> Self { + call(ptr as *mut _) + } + + set[release](ptr: *mut Self, v: Self) { + call(ptr as *mut _, v) + } + } +); + +declare_and_impl_atomic_methods!( + AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") { + xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self { + call(ptr as *mut _, v) + } + + cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: Self, new: Self) -> Self { + call(ptr as *mut _, old, new) + } + + try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool { + call(ptr as *mut _, old, new) + } + } +); + +declare_and_impl_atomic_methods!( + AtomicHasArithmeticOps ("Atomic arithmetic operations") { + add[](ptr: *mut Self, v: Self) { + call(v, ptr as *mut _) + } + + fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self { + call(v, ptr as *mut _) + } + } +); From patchwork Fri Nov 1 06:02:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13858734 Received: from mail-qk1-f178.google.com (mail-qk1-f178.google.com [209.85.222.178]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 478E414C5BD; Fri, 1 Nov 2024 06:03:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; 
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 03/13] rust: sync: atomic: Add ordering annotation types Date: Thu, 31 Oct 2024 23:02:26 -0700 Message-ID: <20241101060237.1185533-4-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Preparation for atomic primitives. Instead of a suffix like _acquire, a method parameter along with the corresponding generic parameter will be used to specify the ordering of an atomic operation. For example, atomic load() can be defined as: impl Atomic { pub fn load(&self, _o: O) -> T { ...
} } and acquire users would do: let r = x.load(Acquire); relaxed users: let r = x.load(Relaxed); doing the following: let r = x.load(Release); will cause a compiler error. Compared to suffixes, it's easier to tell what ordering variants an operation has, and it also makes it easier to unify the implementation of all ordering variants in one method via generics. The `IS_RELAXED` and `ORDER` associated consts are for generic functions to pick up the particular implementation specified by an ordering annotation. Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 3 + rust/kernel/sync/atomic/ordering.rs | 94 +++++++++++++++++++++++++++++ 2 files changed, 97 insertions(+) create mode 100644 rust/kernel/sync/atomic/ordering.rs diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index 21b87563667e..be2e8583595f 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -17,3 +17,6 @@ //! [`LKMM`]: srctree/tools/memory-model/ pub mod ops; +pub mod ordering; + +pub use ordering::{Acquire, Full, Relaxed, Release}; diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs new file mode 100644 index 000000000000..6cf01cd276c6 --- /dev/null +++ b/rust/kernel/sync/atomic/ordering.rs @@ -0,0 +1,94 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Memory orderings. +//! +//! The semantics of these orderings follow the [`LKMM`] definitions and rules. +//! +//! - [`Acquire`] and [`Release`] are similar to their counterparts in the Rust memory model. +//! - [`Full`] means "fully-ordered", that is: +//! - It provides ordering between all the preceding memory accesses and the annotated operation. +//! - It provides ordering between the annotated operation and all the following memory accesses. +//! - It provides ordering between all the preceding memory accesses and all the following memory +//! accesses. +//! - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`). +//!
- [`Relaxed`] is similar to its counterpart in the Rust memory model, except that dependency +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY +//! RELATIONS" in [`LKMM`]'s [`explanation`]. +//! +//! [`LKMM`]: srctree/tools/memory-model/ +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt + +/// The annotation type for relaxed memory ordering. +pub struct Relaxed; + +/// The annotation type for acquire memory ordering. +pub struct Acquire; + +/// The annotation type for release memory ordering. +pub struct Release; + +/// The annotation type for fully-ordered memory ordering. +pub struct Full; + +/// The trait bound for operations that only support relaxed ordering. +pub trait RelaxedOnly {} + +impl RelaxedOnly for Relaxed {} + +/// The trait bound for operations that only support acquire or relaxed ordering. +pub trait AcquireOrRelaxed { + /// Describes whether an ordering is relaxed or not. + const IS_RELAXED: bool = false; +} + +impl AcquireOrRelaxed for Acquire {} + +impl AcquireOrRelaxed for Relaxed { + const IS_RELAXED: bool = true; +} + +/// The trait bound for operations that only support release or relaxed ordering. +pub trait ReleaseOrRelaxed { + /// Describes whether an ordering is relaxed or not. + const IS_RELAXED: bool = false; +} + +impl ReleaseOrRelaxed for Release {} + +impl ReleaseOrRelaxed for Relaxed { + const IS_RELAXED: bool = true; +} + +/// Describes the exact memory ordering of an `impl` of [`All`]. +pub enum OrderingDesc { + /// Relaxed ordering. + Relaxed, + /// Acquire ordering. + Acquire, + /// Release ordering. + Release, + /// Fully-ordered. + Full, +} + +/// The trait bound for annotating operations that should support all orderings. +pub trait All { + /// Describes the exact memory ordering.
+ const ORDER: OrderingDesc; +} + +impl All for Relaxed { + const ORDER: OrderingDesc = OrderingDesc::Relaxed; +} + +impl All for Acquire { + const ORDER: OrderingDesc = OrderingDesc::Acquire; +} + +impl All for Release { + const ORDER: OrderingDesc = OrderingDesc::Release; +} + +impl All for Full { + const ORDER: OrderingDesc = OrderingDesc::Full; +} From patchwork Fri Nov 1 06:02:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13858735 Received: from mail-qt1-f179.google.com (mail-qt1-f179.google.com [209.85.160.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C441114EC77; Fri, 1 Nov 2024 06:03:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.160.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730441042; cv=none; b=iWMr804rRaA07T1el9YywWiKLkTeAH3Zph8mEQS2NAAgM2sek/pI8OiB0rdm9zlFr05OzUvIr6NF5EAE/uWDRjH6QvY3SaYvsHEWy9B7TcV41I3ah1XSIFG6xUVI7KQJSifVBfXPW+UD4ymDRc+hL6Nnyb9ASruXUNjjCOfDeNk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730441042; c=relaxed/simple; bh=OpSA2EKjuTJFsKfFeDiLJco+R1AXhNhVS1oLsxyzDcE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=sv2HJ+qV1bYAfW++0LydFZ21LgWOmFTBQPdhgBT22RVdHcu3akJPGp6uP3qTFDPA/Y1B4e5VU2JpfSbxySdHLW+F70vpHUbhdhPjrmHrk5AR9az21Lz8jIg0BRIlI9NAnEDc7wXTCHlXG/2bwY/6mW5gBN1KhBb+Zhjp8SktGic= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=aOTLd2fM; arc=none smtp.client-ip=209.85.160.179 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: 
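As a standalone sketch of the ordering-annotation pattern from the patch above (editorial illustration only: the `Atomic` wrapper below is a plain value used to demonstrate the trait-bound trick, not the patch's real type, and the returned `bool` exists only to make the selected ordering observable):

```rust
// Sketch only: annotation types select an implementation at compile time.
pub struct Relaxed;
pub struct Acquire;
pub struct Release;

/// Orderings permitted for a load-style operation.
pub trait AcquireOrRelaxed {
    const IS_RELAXED: bool = false;
}
impl AcquireOrRelaxed for Acquire {}
impl AcquireOrRelaxed for Relaxed {
    const IS_RELAXED: bool = true;
}

pub struct Atomic<T>(T);

impl<T: Copy> Atomic<T> {
    // One generic method covers both orderings; the annotation value only
    // picks the implementation via O::IS_RELAXED.
    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> (T, bool) {
        (self.0, O::IS_RELAXED)
    }
}

fn main() {
    let x = Atomic(42i32);
    assert_eq!(x.load(Relaxed), (42, true));
    assert_eq!(x.load(Acquire), (42, false));
    // `x.load(Release)` fails to compile, because `Release` does not
    // implement `AcquireOrRelaxed` -- the compile-time rejection the
    // commit message describes.
}
```

The trait bound, not a runtime check, is what rejects `load(Release)`, which is why unsupported orderings surface as compiler errors rather than panics.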
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="aOTLd2fM" Received: by mail-qt1-f179.google.com with SMTP id d75a77b69052e-460d2571033so10823811cf.1; Thu, 31 Oct 2024 23:03:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1730441039; x=1731045839; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:feedback-id:from:to:cc:subject :date:message-id:reply-to; bh=YIkDBqs1Qg0O/PRjCDXbFtqVlNEc14v8sJMt4WTa4Dc=; b=aOTLd2fMCS1/jksgGLI5FaTwGYoNhn05l0CoZgzdBkADRf6buKKcQ4E5qjdWT1HY8S xPo55xCgp2YtMVUsZi7klus5btoHwN7ncCuH2LPwjC3TZfw+ya0OESor8Mu4Ukl/WAwE PNCKjEOIpxcL2YYHPhdkIFMBf5z3ORjeM3SufBY8RJa1YNhxgSOVqEQjBhYmTABuR+2C 2Mq9KbiQpQ/FfmqZumhCOMZTI2I7E+Uo5OTcLg2THpX8/OGMduksGJIpMfuB9hq87GJc 2eQiRjLkrHjKPNndIAtJxxzdWupw62ic+XX/7/Dv6whuSw8g69JyhZpBZq7SSijYiMAV 34sA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1730441039; x=1731045839; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:feedback-id:x-gm-message-state :from:to:cc:subject:date:message-id:reply-to; bh=YIkDBqs1Qg0O/PRjCDXbFtqVlNEc14v8sJMt4WTa4Dc=; b=TS0/y1rtwoCTNKW6xh2EYV7ioyus88h6csPG9aXYKC10EL8tyeY8a1BdLl0G5zxj5A u5dw9m0jI4z2pZgY4avEpzCXvredoYjRz1Evr6WVl4Fji4t2fgBJcIM3mV1V3DuKT603 VMEWWOPzatsQfVl65HFC3GJa8LE5gqZpK29DHT2aDMXWt54Hwhi7tsxJf/3XxsNhXcgG HhpvmpbwagYkgUVeFrpqbwqzI4h1pdPiWvYjsLOdoJK2Ouz6gJ58hqOH/udx7zEtW2cP kj6AUfdyadGP4mbC/LAhy+GxK1qWk/Atc0U2q6zhAadzJzc0r91RZjiTp9oyxmjNJ7Jt 3LfQ== X-Forwarded-Encrypted: i=1; AJvYcCU46f6c3ehjcYdX1h9P6x9JyUostmlGOTSwtsYkOg91rFHNspt3dQlK05Z3DOZctuETXUrZ@vger.kernel.org, AJvYcCXDNlB460/DfoW3ahloyeK5khCIphes1cNR19LJ5VvOqgwLylwuqYmAUJ69b6tkzy6lhDikiFyGskWTv7+a9A==@vger.kernel.org, 
AJvYcCXDfc5zTnKPmJwC6cxTe9Ow04tSXfZNxFd4l83d5uuX+sz03YUFzdWdQRnbBG1Xw62qYSiKf39IsjJd@vger.kernel.org, AJvYcCXOEMxDICg0hxVtXtSth7/oWh2y63n+tPo5Pe2tqnWidLz4kiRFrf/gXyYaJZM6ZyKJHg8BZVwtiD2tJ/+C@vger.kernel.org X-Gm-Message-State: AOJu0YwaY8GiFkJkxjNbM1wNjlqly4pzOoQxtOX3xdyjDP5A/pv2hbEb wqRr3g3ovdO2E+d05FWGRbJsQCAWlViylFjuLV6dxD+qxLPEtZyb1YvNjUbS X-Google-Smtp-Source: AGHT+IE9q3RycECIgIUdPyowNi8R6JIJY/RG9vuOW1JdKHsIaeAIDMOhejLwyBnLfIfnU1xmsVsfSQ== X-Received: by 2002:a05:622a:ca:b0:460:8f81:8c9a with SMTP id d75a77b69052e-4613c1e3fb0mr298778671cf.60.1730441038443; Thu, 31 Oct 2024 23:03:58 -0700 (PDT) Received: from fauth-a2-smtp.messagingengine.com (fauth-a2-smtp.messagingengine.com. [103.168.172.201]) by smtp.gmail.com with ESMTPSA id d75a77b69052e-462ad086d81sm15237321cf.17.2024.10.31.23.03.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 31 Oct 2024 23:03:58 -0700 (PDT) Received: from phl-compute-08.internal (phl-compute-08.phl.internal [10.202.2.48]) by mailfauth.phl.internal (Postfix) with ESMTP id B039A1200043; Fri, 1 Nov 2024 02:03:57 -0400 (EDT) Received: from phl-mailfrontend-02 ([10.202.2.163]) by phl-compute-08.internal (MEProxy); Fri, 01 Nov 2024 02:03:57 -0400 X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeeftddrvdekkedgkeekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdggtfgfnhhsuhgsshgtrhhisggvpdfu rfetoffkrfgpnffqhgenuceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnh htshculddquddttddmnecujfgurhephffvvefufffkofgjfhgggfestdekredtredttden ucfhrhhomhepuehoqhhunhcuhfgvnhhguceosghoqhhunhdrfhgvnhhgsehgmhgrihhlrd gtohhmqeenucggtffrrghtthgvrhhnpeegleejiedthedvheeggfejveefjeejkefgveff ieeujefhueeigfegueehgeeggfenucevlhhushhtvghrufhiiigvpedunecurfgrrhgrmh epmhgrihhlfhhrohhmpegsohhquhhnodhmvghsmhhtphgruhhthhhpvghrshhonhgrlhhi thihqdeiledvgeehtdeigedqudejjeekheehhedvqdgsohhquhhnrdhfvghngheppehgmh grihhlrdgtohhmsehfihigmhgvrdhnrghmvgdpnhgspghrtghpthhtohepheejpdhmohgu 
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , Björn Roy Baron , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H.
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org
Subject: [RFC v2 04/13] rust: sync: atomic: Add generic atomics
Date: Thu, 31 Oct 2024 23:02:27 -0700
Message-ID: <20241101060237.1185533-5-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

To provide LKMM atomics for Rust code, add a generic `Atomic<T>`. Currently `T` needs to be `Send + Copy`, because these are the straightforward usages and all basic types support them.

The trait `AllowAtomic` should only be implemented inside the atomic mod until the generic atomic framework is mature enough (unless the implementer is a `#[repr(transparent)]` new type). `AtomicImpl` types are automatically `AllowAtomic`, and so far only the basic operations load() and store() are introduced.

Signed-off-by: Boqun Feng
---
rust/kernel/sync/atomic.rs | 2 + rust/kernel/sync/atomic/generic.rs | 253 +++++++++++++++++++++++++++++ 2 files changed, 255 insertions(+) create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index be2e8583595f..b791abc59b61 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -16,7 +16,9 @@ //! //!
[`LKMM`]: srctree/tools/memory-model +pub mod generic; pub mod ops; pub mod ordering; +pub use generic::Atomic; pub use ordering::{Acquire, Full, Relaxed, Release}; diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs new file mode 100644 index 000000000000..204da38e2691 --- /dev/null +++ b/rust/kernel/sync/atomic/generic.rs @@ -0,0 +1,253 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! Generic atomic primitives. + +use super::ops::*; +use super::ordering::*; +use crate::types::Opaque; + +/// A generic atomic variable. +/// +/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen. +/// +/// # Invariants +/// +/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race; this +/// is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety requirement +/// on the usage of pointers returned by [`Self::as_ptr`]. +#[repr(transparent)] +pub struct Atomic<T: AllowAtomic>(Opaque<T>); + +// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic. +unsafe impl<T: AllowAtomic> Sync for Atomic<T> {} + +/// Atomics that support basic atomic operations. +/// +/// TODO: Unless the `impl` is a `#[repr(transparent)]` new type of an existing [`AllowAtomic`] type, the +/// impl block should only be done in the atomic mod. And currently only basic integer types can +/// implement this trait in the atomic mod. +/// +/// # Safety +/// +/// [`Self`] must have the same size and alignment as [`Self::Repr`]. +pub unsafe trait AllowAtomic: Sized + Send + Copy { + /// The backing atomic implementation type. + type Repr: AtomicImpl; + + /// Converts into a [`Self::Repr`]. + fn into_repr(self) -> Self::Repr; + + /// Converts from a [`Self::Repr`]. + fn from_repr(repr: Self::Repr) -> Self; +} + +// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T { + type Repr = Self; + + fn into_repr(self) -> Self::Repr { + self + } + + fn from_repr(repr: Self::Repr) -> Self { + repr + } +} + +impl<T: AllowAtomic> Atomic<T> { + /// Creates a new atomic. + pub const fn new(v: T) -> Self { + Self(Opaque::new(v)) + } + + /// Creates a reference to [`Self`] from a pointer. + /// + /// # Safety + /// + /// - `ptr` has to be a valid pointer. + /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`. + /// - For the whole lifetime `'a`, other accesses to the object cannot cause data races + /// (defined by [`LKMM`]) against atomic operations on the returned reference. + /// + /// [`LKMM`]: srctree/tools/memory-model + /// + /// # Examples + /// + /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can + /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or + /// `WRITE_ONCE()`/`smp_store_release()` on the C side: + /// + /// ```rust + /// # use kernel::types::Opaque; + /// use kernel::sync::atomic::{Atomic, Relaxed, Release}; + /// + /// // Assume there is a C struct `foo`. + /// mod cbindings { + /// #[repr(C)] + /// pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 } + /// } + /// + /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 }); + /// + /// // struct foo *foo_ptr = ..; + /// let foo_ptr = tmp.get(); + /// + /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds. + /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) }; + /// + /// // a = READ_ONCE(foo_ptr->a); + /// // + /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses on it are atomic, so no + /// // data race. + /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed); + /// # assert_eq!(a, 1); + /// + /// // smp_store_release(&foo_ptr->a, 2); + /// // + /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses on it are atomic, so no + /// // data race.
+ /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release); + /// ``` + /// + /// However, this should only be used when communicating with the C side or manipulating a C struct. + pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self + where + T: Sync, + { + // CAST: `T` is transparent to `Atomic<T>`. + // SAFETY: Per the function safety requirement, `ptr` is a valid pointer and the object will + // live long enough. It's safe to return a `&Atomic<T>` because the function safety requirement + // guarantees other accesses won't cause data races. + unsafe { &*ptr.cast::<Self>() } + } + + /// Returns a pointer to the underlying atomic variable. + /// + /// Extra safety requirement on using the returned pointer: the operations done via the pointer + /// cannot cause data races defined by [`LKMM`]. + /// + /// [`LKMM`]: srctree/tools/memory-model + pub const fn as_ptr(&self) -> *mut T { + self.0.get() + } + + /// Returns a mutable reference to the underlying atomic variable. + /// + /// This is safe because the mutable reference to the atomic variable guarantees exclusive + /// access. + pub fn get_mut(&mut self) -> &mut T { + // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been + // initialized. `&mut self` guarantees exclusive access, so it's safe to reborrow + // mutably. + unsafe { &mut *self.as_ptr() } + } +} + +impl<T: AllowAtomic> Atomic<T> +where + T::Repr: AtomicHasBasicOps, +{ + /// Loads the value from the atomic variable.
+ /// + /// # Examples + /// + /// Simple usages: + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x = Atomic::new(42i32); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// let x = Atomic::new(42i64); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// ``` + /// + /// Customized new types in [`Atomic`]: + /// + /// ```rust + /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed}; + /// + /// #[derive(Clone, Copy)] + /// #[repr(transparent)] + /// struct NewType(u32); + /// + /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as + /// // `i32`. + /// unsafe impl AllowAtomic for NewType { + /// type Repr = i32; + /// + /// fn into_repr(self) -> Self::Repr { + /// self.0 as i32 + /// } + /// + /// fn from_repr(repr: Self::Repr) -> Self { + /// NewType(repr as u32) + /// } + /// } + /// + /// let n = Atomic::new(NewType(0)); + /// + /// assert_eq!(0, n.load(Relaxed).0); + /// ``` + #[inline(always)] + pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T { + let a = self.as_ptr().cast::<T::Repr>(); + + // SAFETY: + // - For calling the atomic_read*() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer, + // - per the type invariants, the following atomic operation won't cause data races. + // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`: + // - atomic operations are used here. + let v = unsafe { + if Ordering::IS_RELAXED { + T::Repr::atomic_read(a) + } else { + T::Repr::atomic_read_acquire(a) + } + }; + + T::from_repr(v) + } + + /// Stores a value to the atomic variable.
+ /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x = Atomic::new(42i32); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// x.store(43, Relaxed); + /// + /// assert_eq!(43, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) { + let v = T::into_repr(v); + let a = self.as_ptr().cast::<T::Repr>(); + + // SAFETY: + // - For calling the atomic_set*() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer, + // - per the type invariants, the following atomic operation won't cause data races. + // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`: + // - atomic operations are used here. + unsafe { + if Ordering::IS_RELAXED { + T::Repr::atomic_set(a, v) + } else { + T::Repr::atomic_set_release(a, v) + } + }; + } +}

From patchwork Fri Nov 1 06:02:28 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858736
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , Björn Roy Baron , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org
Subject: [RFC v2 05/13] rust: sync: atomic: Add atomic {cmp,}xchg operations
Date: Thu, 31 Oct 2024 23:02:28 -0700
Message-ID: <20241101060237.1185533-6-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

xchg() and cmpxchg() are basic atomic operations. Provide these based on the C APIs. Note that cmpxchg() uses a function signature similar to compare_exchange() in Rust std: it returns a `Result`; `Ok(old)` means the operation succeeded, and `Err(old)` means the operation failed. With compiler optimization and inline helpers, it should provide the same efficient code generation as using atomic_try_cmpxchg() or atomic_cmpxchg() correctly.
Except it's not! Because of commit 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the success of the cmpxchg and only wants to use the old value. For example, for code like:

// Uses the latest value regardless, same as atomic_cmpxchg() in C. let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);

it will still generate code:

movl $0x40, %ecx movl $0x34, %eax lock cmpxchgl %ecx, 0x4(%rsp) jne 1f 2: ... 1: movl %eax, %ecx jmp 2b

One could attempt to write an x86 try_cmpxchg_exclusive() for Rust use only, because the Rust function takes a `&mut` for the old pointer, which must be exclusive to the function; therefore it's unsafe to use some shared pointer. But maybe I'm missing something?

Signed-off-by: Boqun Feng
---
rust/kernel/sync/atomic/generic.rs | 151 +++++++++++++++++++++++++++++ 1 file changed, 151 insertions(+)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs index 204da38e2691..bfccc4336c75 --- a/rust/kernel/sync/atomic/generic.rs +++ b/rust/kernel/sync/atomic/generic.rs @@ -251,3 +251,154 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) { }; } } + +impl<T: AllowAtomic> Atomic<T> +where + T::Repr: AtomicHasXchgOps, +{ + /// Atomic exchange. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed}; + /// + /// let x = Atomic::new(42); + /// + /// assert_eq!(42, x.xchg(52, Acquire)); + /// assert_eq!(52, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T { + let v = T::into_repr(v); + let a = self.as_ptr().cast::<T::Repr>(); + + // SAFETY: + // - For calling the atomic_xchg*() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer, + // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`: + // - atomic operations are used here. + let ret = unsafe { + match Ordering::ORDER { + OrderingDesc::Full => T::Repr::atomic_xchg(a, v), + OrderingDesc::Acquire => T::Repr::atomic_xchg_acquire(a, v), + OrderingDesc::Release => T::Repr::atomic_xchg_release(a, v), + OrderingDesc::Relaxed => T::Repr::atomic_xchg_relaxed(a, v), + } + }; + + T::from_repr(ret) + } + + /// Atomic compare and exchange. + /// + /// Compare: The comparison is done via a byte-level comparison between the atomic variable + /// and the `old` value. + /// + /// Ordering: A failed compare and exchange does not provide any ordering guarantees; the read + /// part of a failed cmpxchg should be treated as a relaxed read. + /// + /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to `old`, + /// otherwise returns `Err(value)`, and `value` is the value of the atomic variable when the + /// cmpxchg happened. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Full, Relaxed}; + /// + /// let x = Atomic::new(42); + /// + /// // Checks whether cmpxchg succeeded. + /// let success = x.cmpxchg(52, 64, Relaxed).is_ok(); + /// # assert!(!success); + /// + /// // Checks whether cmpxchg failed. + /// let failure = x.cmpxchg(52, 64, Relaxed).is_err(); + /// # assert!(failure); + /// + /// // Uses the old value if it failed, probably to re-try the cmpxchg. + /// match x.cmpxchg(52, 64, Relaxed) { + /// Ok(_) => { }, + /// Err(old) => { + /// // do something with `old`. + /// # assert_eq!(old, 42); + /// } + /// } + /// + /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
+ /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old); + /// # assert_eq!(42, latest); + /// assert_eq!(64, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn cmpxchg<Ordering: Any>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> { + if self.try_cmpxchg(&mut old, new, o) { + Ok(old) + } else { + Err(old) + } + } + + /// Atomic compare and exchange, returning whether the operation succeeded. + /// + /// The "Compare" and "Ordering" parts are the same as [`Atomic::cmpxchg`]. + /// + /// Returns `true` if the cmpxchg succeeded; otherwise returns `false`, with `old` updated to + /// the value of the atomic variable when the cmpxchg happened. + #[inline(always)] + fn try_cmpxchg<Ordering: Any>(&self, old: &mut T, new: T, _: Ordering) -> bool { + let old = (old as *mut T).cast::<T::Repr>(); + let new = T::into_repr(new); + let a = self.0.get().cast::<T::Repr>(); + + // SAFETY: + // - For calling the atomic_try_cmpxchg*() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer, + // - per the type invariants, the following atomic operation won't cause data races. + // - `old` is a valid pointer to write because it comes from a mutable reference. + // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`: + // - atomic operations are used here. + unsafe { + match Ordering::ORDER { + OrderingDesc::Full => T::Repr::atomic_try_cmpxchg(a, old, new), + OrderingDesc::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new), + OrderingDesc::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new), + OrderingDesc::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new), + } + } + } + + /// Atomic compare and exchange and return the [`Result`]. + /// + /// The "Compare" and "Ordering" parts are the same as [`Atomic::cmpxchg`].
+ /// + /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to `old`, + /// otherwise returns `Err(value)`, and `value` is the value of the atomic variable when the + /// cmpxchg happened. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed}; + /// + /// let x = Atomic::new(42i32); + /// + /// assert!(x.compare_exchange(52, 64, Acquire).is_err()); + /// assert_eq!(42, x.load(Relaxed)); + /// + /// assert!(x.compare_exchange(42, 64, Acquire).is_ok()); + /// assert_eq!(64, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn compare_exchange<Ordering: Any>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> { + if self.try_cmpxchg(&mut old, new, o) { + Ok(old) + } else { + Err(old) + } + } +}

From patchwork Fri Nov 1 06:02:29 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858737
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , Björn Roy Baron , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org
Subject: [RFC v2 06/13] rust: sync: atomic: Add the framework of arithmetic operations
Date: Thu, 31 Oct 2024 23:02:29 -0700
Message-ID: <20241101060237.1185533-7-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

One important set of atomic operations is the arithmetic operations, i.e. add(), sub(), fetch_add(), add_return(), etc.
However, it may not make sense for all `AllowAtomic` types to have arithmetic operations; for example, a `Foo(u32)` may not have a reasonable add() or sub(). Moreover, subword types (`u8` and `u16`) currently don't have atomic arithmetic operations even on the C side, and might not get them in Rust either (because they are usually suboptimal on a few architectures). Therefore add a subtrait of `AllowAtomic` describing which types can do atomic arithmetic operations. A few things about this `AllowAtomicArithmetic` trait: * It has an associated type `Delta` instead of using `AllowAtomic::Repr`, because a `Bar(u32)` (whose `Repr` is `i32`) may not want an `add(&self, i32)`, but an `add(&self, u32)`. * `AtomicImpl` types already implement an `AtomicHasArithmeticOps` trait, so add a blanket implementation for them. In the future, `i8` and `i16` may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if arithmetic operations are not available. Only add() and fetch_add() are added for now; the rest will be added in the future. Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic/generic.rs | 102 +++++++++++++++++++++++++++++ 1 file changed, 102 insertions(+) diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs index bfccc4336c75..a75c3e9f4c89 100644 --- a/rust/kernel/sync/atomic/generic.rs +++ b/rust/kernel/sync/atomic/generic.rs @@ -3,6 +3,7 @@ //! Generic atomic primitives. use super::ops::*; +use super::ordering; use super::ordering::*; use crate::types::Opaque; @@ -54,6 +55,23 @@ fn from_repr(repr: Self::Repr) -> Self { } } +/// Atomics that allow arithmetic operations with an integer type. +pub trait AllowAtomicArithmetic: AllowAtomic { + /// The delta type for arithmetic operations. + type Delta; + + /// Converts [`Self::Delta`] into the representation of the atomic type. 
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr; +} + +impl<T: AtomicImpl> AllowAtomicArithmetic for T { + type Delta = Self; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d + } +} + impl<T: AllowAtomic> Atomic<T> { /// Creates a new atomic. pub const fn new(v: T) -> Self { @@ -402,3 +420,87 @@ pub fn compare_exchange(&self, mut old: T, new: T, o: Ordering) - } } } + +impl<T: AllowAtomicArithmetic> Atomic<T> +where + T::Repr: AtomicHasArithmeticOps, +{ + /// Atomic add. + /// + /// The addition is a wrapping addition. + /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Relaxed}; + /// + /// let x = Atomic::new(42); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// x.add(12, Relaxed); + /// + /// assert_eq!(54, x.load(Relaxed)); + /// ``` + #[inline(always)] + pub fn add(&self, v: T::Delta, _: Ordering) { + let v = T::delta_into_repr(v); + let a = self.as_ptr().cast::<T::Repr>(); + + // SAFETY: + // - For calling the atomic_add() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer, + // - per the type invariants, the following atomic operation won't cause data races. + // - For the extra safety requirement of usage on pointers returned by `self.as_ptr()`: + // - atomic operations are used here. + unsafe { + T::Repr::atomic_add(a, v); + } + } + + /// Atomic fetch and add. + /// + /// The addition is a wrapping addition. 
+ /// + /// # Examples + /// + /// ```rust + /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed}; + /// + /// let x = Atomic::new(42); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) }); + /// + /// let x = Atomic::new(42); + /// + /// assert_eq!(42, x.load(Relaxed)); + /// + /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) }); + /// ``` + #[inline(always)] + pub fn fetch_add(&self, v: T::Delta, _: Ordering) -> T { + let v = T::delta_into_repr(v); + let a = self.as_ptr().cast::<T::Repr>(); + + // SAFETY: + // - For calling the atomic_fetch_add*() function: + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`, + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer, + // - per the type invariants, the following atomic operation won't cause data races. + // - For the extra safety requirement of usage on pointers returned by `self.as_ptr()`: + // - atomic operations are used here. 
+ let ret = unsafe { + match Ordering::ORDER { + ordering::OrderingDesc::Full => T::Repr::atomic_fetch_add(a, v), + ordering::OrderingDesc::Acquire => T::Repr::atomic_fetch_add_acquire(a, v), + ordering::OrderingDesc::Release => T::Repr::atomic_fetch_add_release(a, v), + ordering::OrderingDesc::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v), + } + }; + + T::from_repr(ret) + } +} From patchwork Fri Nov 1 06:02:30 2024 X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13858738
From: Boqun Feng To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. 
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 07/13] rust: sync: atomic: Add Atomic<u64> and Atomic<u32> Date: Thu, 31 Oct 2024 23:02:30 -0700 Message-ID: <20241101060237.1185533-8-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add generic atomic support for basic unsigned types that have an `AtomicImpl` with the same size and alignment. Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 80 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index b791abc59b61..b2e81e22c105 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -22,3 +22,83 @@ pub use generic::Atomic; pub use ordering::{Acquire, Full, Relaxed, Release}; + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Relaxed}; +/// +/// let x = Atomic::new(42u64); +/// +/// assert_eq!(42, x.load(Relaxed)); +/// ``` +// SAFETY: `u64` and `i64` have the same size and alignment. 
+unsafe impl generic::AllowAtomic for u64 { + type Repr = i64; + + fn into_repr(self) -> Self::Repr { + self as _ + } + + fn from_repr(repr: Self::Repr) -> Self { + repr as _ + } +} + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Full, Relaxed}; +/// +/// let x = Atomic::new(42u64); +/// +/// assert_eq!(42, x.fetch_add(12, Full)); +/// assert_eq!(54, x.load(Relaxed)); +/// +/// x.add(13, Relaxed); +/// +/// assert_eq!(67, x.load(Relaxed)); +/// ``` +impl generic::AllowAtomicArithmetic for u64 { + type Delta = u64; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d as _ + } +} + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Relaxed}; +/// +/// let x = Atomic::new(42u32); +/// +/// assert_eq!(42, x.load(Relaxed)); +/// ``` +// SAFETY: `u32` and `i32` have the same size and alignment. +unsafe impl generic::AllowAtomic for u32 { + type Repr = i32; + + fn into_repr(self) -> Self::Repr { + self as _ + } + + fn from_repr(repr: Self::Repr) -> Self { + repr as _ + } +} + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Full, Relaxed}; +/// +/// let x = Atomic::new(42u32); +/// +/// assert_eq!(42, x.fetch_add(12, Full)); +/// assert_eq!(54, x.load(Relaxed)); +/// +/// x.add(13, Relaxed); +/// +/// assert_eq!(67, x.load(Relaxed)); +/// ``` +impl generic::AllowAtomicArithmetic for u32 { + type Delta = u32; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d as _ + } +} From patchwork Fri Nov 1 06:02:31 2024 X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13858739
From: Boqun Feng 
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 08/13] rust: sync: atomic: Add Atomic<{usize,isize}> Date: Thu, 31 Oct 2024 23:02:31 -0700 Message-ID: <20241101060237.1185533-9-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add generic atomic support for `usize` and `isize`. Note that instead of mapping directly to `atomic_long_t`, the representation type (`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces the need to create `atomic_long_*` helpers, which can reduce the kernel binary size when inline helpers are not available. 
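The size/alignment premise behind the CONFIG_64BIT selection can be illustrated in userspace Rust, using `target_pointer_width` as a stand-in for CONFIG_64BIT (a sketch, not kernel code; the helper name is made up):

```rust
use std::mem::{align_of, size_of};

// The SAFETY argument for `usize`/`isize`: the type must match the chosen
// representation type exactly in both size and alignment.
fn repr_matches<A, B>() -> bool {
    size_of::<A>() == size_of::<B>() && align_of::<A>() == align_of::<B>()
}

fn main() {
    // `target_pointer_width` plays the role CONFIG_64BIT plays in the patch.
    #[cfg(target_pointer_width = "64")]
    {
        assert!(repr_matches::<usize, i64>());
        assert!(repr_matches::<isize, i64>());
    }
    #[cfg(target_pointer_width = "32")]
    {
        assert!(repr_matches::<usize, i32>());
        assert!(repr_matches::<isize, i32>());
    }
}
```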
Signed-off-by: Boqun Feng --- rust/kernel/sync/atomic.rs | 71 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs index b2e81e22c105..4166ad48604f 100644 --- a/rust/kernel/sync/atomic.rs +++ b/rust/kernel/sync/atomic.rs @@ -102,3 +102,74 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr { d as _ } } + +// SAFETY: `usize` has the same size and alignment as `i64` on 64-bit, and the same as `i32` on +// 32-bit. +unsafe impl generic::AllowAtomic for usize { + #[cfg(CONFIG_64BIT)] + type Repr = i64; + #[cfg(not(CONFIG_64BIT))] + type Repr = i32; + + fn into_repr(self) -> Self::Repr { + self as _ + } + + fn from_repr(repr: Self::Repr) -> Self { + repr as _ + } +} + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Full, Relaxed}; +/// +/// let x = Atomic::new(42usize); +/// +/// assert_eq!(42, x.fetch_add(12, Full)); +/// assert_eq!(54, x.load(Relaxed)); +/// +/// x.add(13, Relaxed); +/// +/// assert_eq!(67, x.load(Relaxed)); +/// ``` +impl generic::AllowAtomicArithmetic for usize { + type Delta = usize; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d as _ + } +} + +// SAFETY: `isize` has the same size and alignment as `i64` on 64-bit, and the same as `i32` on +// 32-bit. 
+unsafe impl generic::AllowAtomic for isize { + #[cfg(CONFIG_64BIT)] + type Repr = i64; + #[cfg(not(CONFIG_64BIT))] + type Repr = i32; + + fn into_repr(self) -> Self::Repr { + self as _ + } + + fn from_repr(repr: Self::Repr) -> Self { + repr as _ + } +} + +/// ```rust +/// use kernel::sync::atomic::{Atomic, Full, Relaxed}; +/// +/// let x = Atomic::new(42isize); +/// +/// assert_eq!(42, x.fetch_add(12, Full)); +/// assert_eq!(54, x.load(Relaxed)); +/// +/// x.add(13, Relaxed); +/// +/// assert_eq!(67, x.load(Relaxed)); +/// ``` +impl generic::AllowAtomicArithmetic for isize { + type Delta = isize; + + fn delta_into_repr(d: Self::Delta) -> Self::Repr { + d as _ + } +} From patchwork Fri Nov 1 06:02:32 2024 X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13858740
From: Boqun Feng To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev Cc: Miguel Ojeda , Alex Gaynor , Wedson Almeida Filho , Boqun Feng , Gary Guo , =?utf-8?q?Bj=C3=B6rn_Roy_Baron?= , Benno Lossin , Andreas Hindborg , Alice Ryhl , Alan Stern , Andrea Parri , Will Deacon , Peter Zijlstra , Nicholas Piggin , David Howells , Jade Alglave , Luc Maranget , "Paul E. McKenney" , Akira Yokosawa , Daniel Lustig , Joel Fernandes , Nathan Chancellor , Nick Desaulniers , kent.overstreet@gmail.com, Greg Kroah-Hartman , elver@google.com, Mark Rutland , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. 
Peter Anvin" , Catalin Marinas , torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org, Trevor Gross , dakr@redhat.com, Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Uladzislau Rezki , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Zqiang , Paul Walmsley , Palmer Dabbelt , Albert Ou , linux-riscv@lists.infradead.org Subject: [RFC v2 09/13] rust: sync: atomic: Add Atomic<*mut T> Date: Thu, 31 Oct 2024 23:02:32 -0700 Message-ID: <20241101060237.1185533-10-boqun.feng@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com> References: <20241101060237.1185533-1-boqun.feng@gmail.com> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add atomic support for raw pointer values. As with `isize` and `usize`, the representation type is selected based on CONFIG_64BIT. `*mut T` is not `Send`; however, `Atomic<*mut T>` definitely needs to be `Sync`, and that is the whole point of atomics: being able to have multiple shared references in different threads so that they can sync with each other. As a result, a pointer value will be transferred from one thread to another via `Atomic<*mut T>`: x.store(p1, Relaxed); let p = x.load(Relaxed); This means a raw pointer value (`*mut T`) needs to be able to cross thread boundaries, which is essentially what `Send` means. To reflect this in the type system, and based on the fact that pointer values can be transferred safely (only dereferencing them is unsafe), as suggested by Alice, extend the `AllowAtomic` trait to include a customized `Send` semantic, that is: an `impl` of `AllowAtomic` has to be safe to transfer across thread boundaries. 
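The standard library's `AtomicPtr` shows the same idea in userspace: the atomic is `Sync`, so a pointer *value* can be published in one thread and picked up in another, while dereferencing it remains unsafe. A sketch using std as a stand-in for the kernel types (the static name and values here are made up for illustration):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};
use std::{ptr, thread};

// A shared atomic slot; AtomicPtr is Sync, so both threads may reference it.
static SLOT: AtomicPtr<i32> = AtomicPtr::new(ptr::null_mut());

fn main() {
    // Publish a pointer value from this thread...
    let p1 = Box::into_raw(Box::new(42));
    SLOT.store(p1, Ordering::Release);

    // ...and read the *value* back in another thread. Moving the pointer
    // value across the thread boundary is safe; only the dereference needs
    // an unsafe block.
    let read = thread::spawn(|| {
        let p = SLOT.load(Ordering::Acquire);
        // SAFETY: `p` came from a live Box published before the spawn, the
        // Acquire load pairs with the Release store, and the allocation is
        // not freed until after join().
        unsafe { *p }
    })
    .join()
    .unwrap();
    assert_eq!(read, 42);

    // SAFETY: reclaim the allocation exactly once.
    unsafe { drop(Box::from_raw(SLOT.load(Ordering::Relaxed))) };
}
```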
Suggested-by: Alice Ryhl
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs         | 24 ++++++++++++++++++++++++
 rust/kernel/sync/atomic/generic.rs | 16 +++++++++++++---
 2 files changed, 37 insertions(+), 3 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 4166ad48604f..e62c3cd1d3ca 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -173,3 +173,27 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
         d as _
     }
 }
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let x = Atomic::new(core::ptr::null_mut::<i32>());
+///
+/// assert!(x.load(Relaxed).is_null());
+/// ```
+// SAFETY: A `*mut T` has the same size and alignment as `i64` for 64bit and as `i32` for 32bit.
+// And it's safe to transfer the ownership of a pointer value to another thread.
+unsafe impl<T> generic::AllowAtomic for *mut T {
+    #[cfg(CONFIG_64BIT)]
+    type Repr = i64;
+    #[cfg(not(CONFIG_64BIT))]
+    type Repr = i32;
+
+    fn into_repr(self) -> Self::Repr {
+        self as _
+    }
+
+    fn from_repr(repr: Self::Repr) -> Self {
+        repr as _
+    }
+}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index a75c3e9f4c89..cff98469ed35 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -19,6 +19,10 @@
 #[repr(transparent)]
 pub struct Atomic<T: AllowAtomic>(Opaque<T::Repr>);
 
+// SAFETY: `Atomic<T>` is safe to send between execution contexts, because `T` is `AllowAtomic`
+// and `AllowAtomic`'s safety requirement guarantees that.
+unsafe impl<T: AllowAtomic> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
 
@@ -30,8 +34,13 @@ unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
 ///
 /// # Safety
 ///
-/// [`Self`] must have the same size and alignment as [`Self::Repr`].
-pub unsafe trait AllowAtomic: Sized + Send + Copy {
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - The implementer must guarantee that it is safe to transfer ownership of a value from one
+///   execution context to another; i.e. the type has to behave as if it were [`Send`]. Because
+///   `*mut T` is not [`Send`] yet is a basic type that needs to support atomic operations, this
+///   requirement is made part of the [`AllowAtomic`] safety contract instead. It is automatically
+///   satisfied if the type is [`Send`].
+pub unsafe trait AllowAtomic: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;
@@ -42,7 +51,8 @@ pub unsafe trait AllowAtomic: Sized + Send + Copy {
     fn from_repr(repr: Self::Repr) -> Self;
 }
 
-// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment. And all
+// `AtomicImpl` types are `Send`.
 unsafe impl<T: AtomicImpl> AllowAtomic for T {
     type Repr = Self;

From patchwork Fri Nov 1 06:02:33 2024
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v2 10/13] rust: sync: atomic: Add arithmetic ops for Atomic<*mut T>
Date: Thu, 31 Oct 2024 23:02:33 -0700
Message-ID: <20241101060237.1185533-11-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

(This is more of an RFC.)

Add arithmetic operations support for `Atomic<*mut T>`. Currently the
semantics of the arithmetic atomic operations are the same as pointer
arithmetic, that is, e.g. `Atomic<*mut u64>::add(1)` adds 8
(`size_of::<u64>()`) to the pointer value.

The Rust std library has two sets of pointer arithmetic for
`AtomicPtr`:

* ptr_add() and ptr_sub(), which are the same as Atomic<*mut T>::add():
  pointer arithmetic.
* byte_add() and byte_sub(), which use the input as a byte offset to
  change the pointer value, e.g. byte_add(1) adds 1 to the pointer
  value.

We can either take the approach in the current patch and add byte_add()
later on if needed, or start with the ptr_add() and byte_add() naming.

Signed-off-by: Boqun Feng
---
 rust/kernel/sync/atomic.rs | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index e62c3cd1d3ca..cbe5d40d9e36 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -197,3 +197,32 @@ fn from_repr(repr: Self::Repr) -> Self {
         repr as _
     }
 }
+
+/// ```rust
+/// use kernel::sync::atomic::{Atomic, Relaxed};
+///
+/// let s: &mut [i32] = &mut [1, 3, 2, 4];
+///
+/// let x = Atomic::new(s.as_mut_ptr());
+///
+/// x.add(1, Relaxed);
+///
+/// let ptr = x.fetch_add(1, Relaxed); // points to the 2nd element.
+/// let ptr2 = x.load(Relaxed); // points to the 3rd element.
+///
+/// // SAFETY: `ptr` and `ptr2` are valid pointers to the 2nd and 3rd elements of `s` with write
+/// // provenance, and no other thread is accessing these elements.
+/// unsafe { core::ptr::swap(ptr, ptr2); }
+///
+/// assert_eq!(s, &mut [1, 2, 3, 4]);
+/// ```
+impl<T> generic::AllowAtomicArithmetic for *mut T {
+    type Delta = isize;
+
+    /// The behavior of arithmetic operations is the same as pointer arithmetic.
+    fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+        // Atomic arithmetic operations are wrapping, so a wrapping_mul() here suffices even
+        // if overflow may happen.
+        d.wrapping_mul(core::mem::size_of::<T>() as _) as _
+    }
+}

From patchwork Fri Nov 1 06:02:34 2024
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Subject: [RFC v2 11/13] rust: sync: Add memory barriers
Date: Thu, 31 Oct 2024 23:02:34 -0700
Message-ID: <20241101060237.1185533-12-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.
The compiler barrier, barrier(), is implemented in inline asm instead
of using core::sync::atomic::compiler_fence(), because the memory
models are different: the kernel's atomics are implemented in inline
asm, therefore the compiler barrier should be implemented in inline asm
as well.

Signed-off-by: Boqun Feng
---
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+)
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index ab5a3f1be241..f4a94833b29d 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -8,6 +8,7 @@
  */
 
 #include "atomic.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 66ac3752ca71..0d0b19441ae8 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -9,6 +9,7 @@
 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..277aa09747bf
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions of
+//! the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// An explicit compiler barrier function that prevents the compiler from moving the memory
+/// accesses on either side of it to the other side.
+pub fn barrier() {
+    // By default, Rust inline asms are treated as being able to access any memory or flags, hence
+    // it suffices as a compiler barrier.
+    //
+    // SAFETY: An empty asm block should be safe.
+    unsafe {
+        core::arch::asm!("");
+    }
+}
+
+/// A full memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
+/// on either side of it to the other side.
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe {
+            bindings::smp_mb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory write
+/// accesses on either side of it to the other side.
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe {
+            bindings::smp_wmb();
+        }
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory read
+/// accesses on either side of it to the other side.
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe {
+            bindings::smp_rmb();
+        }
+    } else {
+        barrier();
+    }
+}

From patchwork Fri Nov 1 06:02:35 2024
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: (same recipients as the other patches in the series), Danilo Krummrich
Subject: [RFC v2 12/13] rust: add rcu abstraction
Date: Thu, 31 Oct 2024 23:02:35 -0700
Message-ID: <20241101060237.1185533-13-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

From: Wedson Almeida Filho

Add a simple abstraction to guard critical code sections with an RCU
read lock.
Signed-off-by: Wedson Almeida Filho Signed-off-by: Danilo Krummrich --- rust/helpers/helpers.c | 1 + rust/helpers/rcu.c | 13 +++++++++++ rust/kernel/sync.rs | 1 + rust/kernel/sync/rcu.rs | 52 +++++++++++++++++++++++++++++++++++++++++ 4 files changed, 67 insertions(+) create mode 100644 rust/helpers/rcu.c create mode 100644 rust/kernel/sync/rcu.rs diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c index f4a94833b29d..65951245879f 100644 --- a/rust/helpers/helpers.c +++ b/rust/helpers/helpers.c @@ -18,6 +18,7 @@ #include "mutex.c" #include "page.c" #include "rbtree.c" +#include "rcu.c" #include "refcount.c" #include "signal.c" #include "slab.c" diff --git a/rust/helpers/rcu.c b/rust/helpers/rcu.c new file mode 100644 index 000000000000..f1cec6583513 --- /dev/null +++ b/rust/helpers/rcu.c @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include + +void rust_helper_rcu_read_lock(void) +{ + rcu_read_lock(); +} + +void rust_helper_rcu_read_unlock(void) +{ + rcu_read_unlock(); +} diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs index 0d0b19441ae8..f5a413e1ce30 100644 --- a/rust/kernel/sync.rs +++ b/rust/kernel/sync.rs @@ -13,6 +13,7 @@ mod condvar; pub mod lock; mod locked_by; +pub mod rcu; pub use arc::{Arc, ArcBorrow, UniqueArc}; pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult}; diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs new file mode 100644 index 000000000000..5a35495f69a4 --- /dev/null +++ b/rust/kernel/sync/rcu.rs @@ -0,0 +1,52 @@ +// SPDX-License-Identifier: GPL-2.0 + +//! RCU support. +//! +//! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h) + +use crate::bindings; +use core::marker::PhantomData; + +/// Evidence that the RCU read side lock is held on the current thread/CPU. +/// +/// The type is explicitly not `Send` because this property is per-thread/CPU. +/// +/// # Invariants +/// +/// The RCU read side lock is actually held while instances of this guard exist. 
+pub struct Guard {
+    _not_send: PhantomData<*mut ()>,
+}
+
+impl Guard {
+    /// Acquires the RCU read side lock and returns a guard.
+    pub fn new() -> Self {
+        // SAFETY: An FFI call with no additional requirements.
+        unsafe { bindings::rcu_read_lock() };
+        // INVARIANT: The RCU read side lock was just acquired above.
+        Self {
+            _not_send: PhantomData,
+        }
+    }
+
+    /// Explicitly releases the RCU read side lock.
+    pub fn unlock(self) {}
+}
+
+impl Default for Guard {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl Drop for Guard {
+    fn drop(&mut self) {
+        // SAFETY: By the type invariants, the rcu read side is locked, so it is ok to unlock it.
+        unsafe { bindings::rcu_read_unlock() };
+    }
+}
+
+/// Acquires the RCU read side lock.
+pub fn read_lock() -> Guard {
+    Guard::new()
+}

From patchwork Fri Nov 1 06:02:36 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13858744
From: Boqun Feng
To: rust-for-linux@vger.kernel.org, rcu@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
 llvm@lists.linux.dev, lkmm@lists.linux.dev
Cc: Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Alan Stern,
 Andrea Parri, Will Deacon, Peter Zijlstra, Nicholas Piggin, David Howells,
 Jade Alglave, Luc Maranget, "Paul E. McKenney", Akira Yokosawa,
 Daniel Lustig, Joel Fernandes, Nathan Chancellor, Nick Desaulniers,
 kent.overstreet@gmail.com, Greg Kroah-Hartman, elver@google.com,
 Mark Rutland, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Catalin Marinas,
 torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, Trevor Gross, dakr@redhat.com,
 Frederic Weisbecker, Neeraj Upadhyay, Josh Triplett, Uladzislau Rezki,
 Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Subject: [RFC v2 13/13] rust: sync: rcu: Add RCU protected pointer
Date: Thu, 31 Oct 2024 23:02:36 -0700
Message-ID: <20241101060237.1185533-14-boqun.feng@gmail.com>
In-Reply-To: <20241101060237.1185533-1-boqun.feng@gmail.com>
References: <20241101060237.1185533-1-boqun.feng@gmail.com>

An RCU protected pointer is an atomic pointer that can be loaded and
dereferenced by multiple RCU readers, while only one updater/writer at
a time can change the value (usually following a read-copy-update
pattern). This is useful when the data is read-mostly.
The rationale of this patch is to provide a proof of concept on how RCU
should be exposed to the Rust world, and it also serves as an example
of atomic usage. Similar mechanisms like ArcSwap [1] are already widely
used.

Provide a `Rcu<P>` type with an atomic pointer implementation. `P` has
to be a `ForeignOwnable`, which means the ownership of an object can be
represented by a pointer-sized value. `Rcu::dereference()` requires an
RCU Guard, which means dereferencing is only valid under RCU read lock
protection. `Rcu::read_copy_update()` is the operation for updaters; it
requires a `Pin<&mut Self>` for exclusive access, since RCU updaters
are normally exclusive with each other.

A lot of RCU functionality, including asynchronous freeing (call_rcu()
and kfree_rcu()), is still missing and will be future work. Also, we
still need language changes like field projection [2] to provide better
ergonomics.

Acknowledgment: this work is based on a lot of productive discussions
and hard work from others; these are the ones I can remember (sorry if
I forgot your contribution):

* Wedson started the work on RCU field projection, and Benno followed
  it up and has been working on it as a more general language feature.
  Also, Gary's field-projection repo [3] has been used as an example
  for related discussions.

* During Kangrejos 2023 [4], Gary, Benno and Alice provided a lot of
  feedback on the talk from Paul and me: "If you want to use RCU in
  Rust for Linux kernel..."

* During a recent discussion among Benno, Paul and me, Benno suggested
  using `Pin<&mut>` to guarantee the exclusive access on updater
  operations.

Link: https://crates.io/crates/arc-swap [1]
Link: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/Field.20Projections/near/474648059 [2]
Link: https://github.com/nbdd0121/field-projection [3]
Link: https://kangrejos.com/2023 [4]
Signed-off-by: Boqun Feng
---
 rust/kernel/sync/rcu.rs | 269 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 268 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
index 5a35495f69a4..8326b2e0986a 100644
--- a/rust/kernel/sync/rcu.rs
+++ b/rust/kernel/sync/rcu.rs
@@ -5,7 +5,11 @@
 //! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)
 
 use crate::bindings;
-use core::marker::PhantomData;
+use crate::{
+    sync::atomic::{Atomic, Relaxed, Release},
+    types::ForeignOwnable,
+};
+use core::{marker::PhantomData, pin::Pin, ptr::NonNull};
 
 /// Evidence that the RCU read side lock is held on the current thread/CPU.
 ///
@@ -50,3 +54,266 @@ fn drop(&mut self) {
 pub fn read_lock() -> Guard {
     Guard::new()
 }
+
+/// An RCU protected pointer, the pointed object is protected by RCU.
+///
+/// # Invariants
+///
+/// Either the pointer is null, or it points to a return value of [`P::into_foreign`] and the
+/// atomic variable exclusively owns the pointer.
+pub struct Rcu<P: ForeignOwnable>(Atomic<*mut core::ffi::c_void>, PhantomData<P>);
+
+/// A pointer that has been unpublished, but hasn't waited for a grace period yet.
+///
+/// The pointed object may still have an existing RCU reader. Therefore a grace period is needed
+/// to free the object.
+///
+/// # Invariants
+///
+/// The pointer has to be a return value of [`P::into_foreign`] and [`Self`] exclusively owns the
+/// pointer.
+pub struct RcuOld<P: ForeignOwnable>(NonNull<core::ffi::c_void>, PhantomData<P>);
+
+impl<P: ForeignOwnable> Drop for RcuOld<P> {
+    fn drop(&mut self) {
+        // SAFETY: As long as called in a sleepable context, which should be checked by klint,
+        // `synchronize_rcu()` is safe to call.
+        unsafe {
+            bindings::synchronize_rcu();
+        }
+
+        // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+        // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+        // `ForeignOwnable::borrow()` anymore.
+        let p: P = unsafe { P::from_foreign(self.0.as_ptr()) };
+        drop(p);
+    }
+}
+
+impl<P: ForeignOwnable> Rcu<P> {
+    /// Creates a new RCU pointer.
+    pub fn new(p: P) -> Self {
+        // INVARIANTS: The return value of `p.into_foreign()` is directly stored in the atomic
+        // variable.
+        Self(Atomic::new(p.into_foreign().cast_mut()), PhantomData)
+    }
+
+    /// Dereferences the protected object.
+    ///
+    /// Returns `Some(b)`, where `b` is a reference-like borrowed type, if the pointer is not
+    /// null, otherwise returns `None`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let x = Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?);
+    ///
+    /// let g = rcu::read_lock();
+    /// // Read under RCU read lock protection.
+    /// let v = x.dereference(&g);
+    ///
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    ///
+    /// Note the borrowed access can outlive the reference of the [`Rcu<P>`], this is because as
+    /// long as the RCU read lock is held, the pointed object should remain valid.
+    ///
+    /// In the following case, the main thread is responsible for the ownership of `shared`,
+    /// i.e. it will drop it eventually, and a work item can temporarily access the `shared` via
+    /// `cloned`, but the use of the dereferenced object doesn't depend on `cloned`'s existence.
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// # use kernel::workqueue::system;
+    /// # use kernel::sync::{Arc, atomic::{Atomic, Acquire, Release}};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// struct Config {
+    ///     a: i32,
+    ///     b: i32,
+    ///     c: i32,
+    /// }
+    ///
+    /// let config = KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?;
+    ///
+    /// let shared = Arc::new(Rcu::new(config), flags::GFP_KERNEL)?;
+    /// let cloned = shared.clone();
+    ///
+    /// // Use atomic to simulate a special refcounting.
+    /// static FLAG: Atomic<i32> = Atomic::new(0);
+    ///
+    /// system().try_spawn(flags::GFP_KERNEL, move || {
+    ///     let g = rcu::read_lock();
+    ///     let v = cloned.dereference(&g).unwrap();
+    ///     drop(cloned); // release reference to `shared`.
+    ///     FLAG.store(1, Release);
+    ///
+    ///     // but still need to access `v`.
+    ///     assert_eq!(v.a, 1);
+    ///     drop(g);
+    /// });
+    ///
+    /// // Wait until `cloned` dropped.
+    /// while FLAG.load(Acquire) == 0 {
+    ///     // SAFETY: Sleep should be safe.
+    ///     unsafe { kernel::bindings::schedule(); }
+    /// }
+    ///
+    /// drop(shared);
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
+        // Ordering: Address dependency pairs with the `store(Release)` in `read_copy_update()`.
+        let ptr = self.0.load(Relaxed);
+
+        if !ptr.is_null() {
+            // SAFETY:
+            // - Since `ptr` is not null, it has to be a return value of `P::into_foreign()`.
+            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guard, this guarantees the
+            //   return value will only be used under RCU read lock, and the RCU read lock
+            //   prevents the pass of a grace period that the drop of `RcuOld` or `Rcu` is
+            //   waiting for, therefore no `from_foreign()` will be called for `ptr` as long as
+            //   `Borrowed` exists.
+            //
+            //   CPU 0                                   CPU 1
+            //   =====                                   =====
+            //   { `x` is a reference to Rcu<KBox<i32>> }
+            //   let g = rcu::read_lock();
+            //
+            //   if let Some(b) = x.dereference(&g) {
+            //       // drop(g); cannot be done, since `b` is still alive.
+            //
+            //                                           if let Some(old) = x.replace(...) {
+            //                                               // `x` is null now.
+            //       println!("{}", b);
+            //   }
+            //                                               drop(old):
+            //                                                   synchronize_rcu();
+            //   drop(g);
+            //                                                   // a grace period passed.
+            //                                                   // No `Borrowed` exists now.
+            //                                                   from_foreign(...);
+            //                                           }
+            Some(unsafe { P::borrow(ptr) })
+        } else {
+            None
+        }
+    }
+
+    /// Read, copy and update the pointer with new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// The `Pin<&mut Self>` is needed because this function needs the exclusive access to
+    /// [`Rcu<P>`], otherwise two `read_copy_update()`s may get the same old object and double
+    /// free. Using `Pin<&mut Self>` provides the exclusive access that the C side requires with
+    /// type system checking.
+    ///
+    /// Also this has to be `Pin` because a `&mut Self` may allow users to `swap()` safely, and
+    /// that would break the atomicity. A [`Rcu<P>`] should be structurally pinned in the struct
+    /// that contains it.
+    ///
+    /// Note that `Pin<&mut Self>` cannot assume noalias here because [`Atomic<T>`] is an
+    /// [`Opaque<T>`], which has the same effect on aliasing rules as [`UnsafePinned`].
+    ///
+    /// [`UnsafePinned`]: https://rust-lang.github.io/rfcs/3467-unsafe-pinned.html
+    pub fn read_copy_update<F>(self: Pin<&mut Self>, f: F) -> Option<RcuOld<P>>
+    where
+        F: FnOnce(Option<P::Borrowed<'_>>) -> Option<P>,
+    {
+        // step 1: READ.
+        // Ordering: Address dependency pairs with the `store(Release)` in `read_copy_update()`.
+        let old_ptr = NonNull::new(self.0.load(Relaxed));
+
+        let old = old_ptr.map(|nonnull| {
+            // SAFETY: Per type invariants `old_ptr` has to be a value returned by a previous
+            // `into_foreign()`, and the exclusive reference `self` guarantees that
+            // `from_foreign()` has not been called.
+            unsafe { P::borrow(nonnull.as_ptr()) }
+        });
+
+        // step 2: COPY, or more generally, initializing `new` based on `old`.
+        let new = f(old);
+
+        // step 3: UPDATE.
+        if let Some(new) = new {
+            let new_ptr = new.into_foreign().cast_mut();
+            // Ordering: Pairs with the address dependency in `dereference()` and
+            // `read_copy_update()`.
+            // INVARIANTS: `new.into_foreign()` is directly stored into the atomic variable.
+            self.0.store(new_ptr, Release);
+        } else {
+            // Ordering: Setting to a null pointer doesn't need to be Release.
+            // INVARIANTS: The atomic variable is set to be null.
+            self.0.store(core::ptr::null_mut(), Relaxed);
+        }
+
+        // INVARIANTS: The exclusive reference guarantees that the ownership of a previous
+        // `into_foreign()` transferred to the `RcuOld`.
+        Some(RcuOld(old_ptr?, PhantomData))
+    }
+
+    /// Replaces the pointer with new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where
+    /// `old` is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use core::pin::pin;
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?));
+    /// let q = KBox::new(101i32, flags::GFP_KERNEL)?;
+    ///
+    /// // Read under RCU read lock protection.
+    /// let g = rcu::read_lock();
+    /// let v = x.dereference(&g);
+    ///
+    /// // Replace with a new object.
+    /// let old = x.as_mut().replace(q);
+    ///
+    /// assert!(old.is_some());
+    ///
+    /// // `v` should still read the old value.
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// // New readers should get the new value.
+    /// assert_eq!(x.dereference(&g), Some(&101i32));
+    ///
+    /// drop(g);
+    ///
+    /// // Can free the object outside the read-side critical section.
+    /// drop(old);
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn replace(self: Pin<&mut Self>, new: P) -> Option<RcuOld<P>> {
+        self.read_copy_update(|_| Some(new))
+    }
+}
+
+impl<P: ForeignOwnable> Drop for Rcu<P> {
+    fn drop(&mut self) {
+        let ptr = *self.0.get_mut();
+        if !ptr.is_null() {
+            // SAFETY: As long as called in a sleepable context, which should be checked by
+            // klint, `synchronize_rcu()` is safe to call.
+            unsafe {
+                bindings::synchronize_rcu();
+            }
+
+            // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+            // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+            // `ForeignOwnable::borrow()` anymore.
+            drop(unsafe { P::from_foreign(ptr) });
+        }
+    }
+}