From patchwork Tue Oct 8 13:50:33 2024
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Boqun Feng
Cc: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org,
 lkmm@lists.linux.dev
Subject: [RFC PATCH v3 3/4] hazptr: Implement Hazard Pointers
Date: Tue, 8 Oct 2024 09:50:33 -0400
Message-Id: <20241008135034.1982519-4-mathieu.desnoyers@efficios.com>
In-Reply-To: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
References: <20241008135034.1982519-1-mathieu.desnoyers@efficios.com>
MIME-Version: 1.0

This API provides existence guarantees of objects through Hazard
Pointers (hazptr). This minimalist implementation is specific to use
with preemption disabled, but can be extended further as needed.

Each hazptr domain defines a fixed number of hazard pointer slots
(nr_cpus) across the entire system.

Its main benefit over RCU is that it allows fast reclaim of
HP-protected pointers without needing to wait for a grace period.

It also allows the hazard pointer scan to call a user-defined callback
to retire a hazard pointer slot immediately if needed. This callback
may, for instance, issue an IPI to the relevant CPU.
There are a few possible use-cases for this in the Linux kernel:

- Improve performance of mm_count by replacing the lazy active mm
  scheme with hazptr.

- Guarantee object existence on pointer dereference so a refcount can
  safely be taken:
  - replace locking used for that purpose in some drivers,
  - replace the RCU + inc_not_zero pattern.

- rtmutex: improve situations where locks need to be taken in reverse
  dependency chain order, by guaranteeing existence of the first and
  second locks in traversal order, allowing them to be locked in the
  correct order (which is the reverse of the traversal order) rather
  than doing try-lock+retry on the nested lock.

References:

[1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
     lock-free objects," in IEEE Transactions on Parallel and
     Distributed Systems, vol. 15, no. 6, pp. 491-504, June 2004.

Link: https://lore.kernel.org/lkml/j3scdl5iymjlxavomgc6u5ndg3svhab6ga23dr36o4f5mt333w@7xslvq6b6hmv/
Link: https://lpc.events/event/18/contributions/1731/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Nicholas Piggin
Cc: Michael Ellerman
Cc: Greg Kroah-Hartman
Cc: Sebastian Andrzej Siewior
Cc: "Paul E. McKenney"
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Alan Stern
Cc: John Stultz
Cc: Neeraj Upadhyay
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Frederic Weisbecker
Cc: Joel Fernandes
Cc: Josh Triplett
Cc: Uladzislau Rezki
Cc: Steven Rostedt
Cc: Lai Jiangshan
Cc: Zqiang
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Mark Rutland
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik
Cc: Jonas Oberhauser
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
---
Changes since v0:
- Remove slot variable from hp_dereference_allocate().

Changes since v2:
- Address Peter Zijlstra's comments.
- Address Paul E. McKenney's comments.
---
 include/linux/hazptr.h | 165 +++++++++++++++++++++++++++++++++++++++++
 kernel/Makefile        |   2 +-
 kernel/hazptr.c        |  51 +++++++++++++
 3 files changed, 217 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/hazptr.h
 create mode 100644 kernel/hazptr.c

diff --git a/include/linux/hazptr.h b/include/linux/hazptr.h
new file mode 100644
index 000000000000..f8e36d2bdc58
--- /dev/null
+++ b/include/linux/hazptr.h
@@ -0,0 +1,165 @@
+// SPDX-FileCopyrightText: 2024 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+//
+// SPDX-License-Identifier: LGPL-2.1-or-later
+
+#ifndef _LINUX_HAZPTR_H
+#define _LINUX_HAZPTR_H
+
+/*
+ * HP: Hazard Pointers
+ *
+ * This API provides existence guarantees of objects through hazard
+ * pointers.
+ *
+ * It uses a fixed number of hazard pointer slots (nr_cpus) across the
+ * entire system for each hazard pointer domain.
+ *
+ * Its main benefit over RCU is that it allows fast reclaim of
+ * HP-protected pointers without needing to wait for a grace period.
+ *
+ * It also allows the hazard pointer scan to call a user-defined callback
+ * to retire a hazard pointer slot immediately if needed. This callback
+ * may, for instance, issue an IPI to the relevant CPU.
+ *
+ * References:
+ *
+ * [1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
+ *      lock-free objects," in IEEE Transactions on Parallel and
+ *      Distributed Systems, vol. 15, no. 6, pp. 491-504, June 2004
+ */
+
+#include <linux/percpu.h>
+#include <linux/smp.h>
+
+/*
+ * Hazard pointer slot.
+ */
+struct hazptr_slot {
+	void *addr;
+};
+
+struct hazptr_domain {
+	struct hazptr_slot __percpu *percpu_slots;
+};
+
+#define DECLARE_HAZPTR_DOMAIN(domain) \
+	extern struct hazptr_domain domain
+
+#define DEFINE_HAZPTR_DOMAIN(domain) \
+	static DEFINE_PER_CPU(struct hazptr_slot, __ ## domain ## _slots); \
+	struct hazptr_domain domain = { \
+		.percpu_slots = &__ ## domain ## _slots, \
+	}
+
+/*
+ * hazptr_scan: Scan hazard pointer domain for @addr.
+ *
+ * Scan hazard pointer domain for @addr.
+ * If @on_match_cb is NULL, wait to observe that each slot contains a value
+ * that differs from @addr.
+ * If @on_match_cb is non-NULL, invoke @on_match_cb for each slot containing
+ * @addr.
+ */
+void hazptr_scan(struct hazptr_domain *domain, void *addr,
+		 void (*on_match_cb)(int cpu, struct hazptr_slot *slot, void *addr));
+
+/*
+ * hazptr_try_protect: Try to protect with hazard pointer.
+ *
+ * Try to protect @addr with a hazard pointer slot. The object existence
+ * should be guaranteed by the caller. Expects to be called from
+ * preempt-disabled context.
+ *
+ * Returns true if the protection succeeds, false otherwise.
+ * On success, if @_slot is not NULL, the protecting hazptr slot is stored
+ * in @_slot.
+ */
+static inline
+bool hazptr_try_protect(struct hazptr_domain *hazptr_domain, void *addr, struct hazptr_slot **_slot)
+{
+	struct hazptr_slot __percpu *percpu_slots = hazptr_domain->percpu_slots;
+	struct hazptr_slot *slot;
+
+	if (!addr)
+		return false;
+	slot = this_cpu_ptr(percpu_slots);
+	/*
+	 * A single hazard pointer slot per CPU is available currently.
+	 * Other hazard pointer domains can eventually have a different
+	 * configuration.
+	 */
+	if (READ_ONCE(slot->addr))
+		return false;
+	WRITE_ONCE(slot->addr, addr);	/* Store B */
+	if (_slot)
+		*_slot = slot;
+	return true;
+}
+
+/*
+ * hazptr_load_try_protect: Load and try to protect with hazard pointer.
+ *
+ * Load @addr_p, and try to protect the loaded pointer with hazard
+ * pointers.
+ *
+ * Returns a protected address on success, NULL on failure. Expects to
+ * be called from preempt-disabled context.
+ *
+ * On success, if @_slot is not NULL, the protecting hazptr slot is stored
+ * in @_slot.
+ */
+static inline
+void *__hazptr_load_try_protect(struct hazptr_domain *hazptr_domain,
+				void * const *addr_p, struct hazptr_slot **_slot)
+{
+	struct hazptr_slot *slot;
+	void *addr, *addr2;
+
+	/*
+	 * Load @addr_p to know which address should be protected.
+	 */
+	addr = READ_ONCE(*addr_p);
+retry:
+	/* Try to protect the address by storing it into a slot. */
+	if (!hazptr_try_protect(hazptr_domain, addr, &slot))
+		return NULL;
+	/* Memory ordering: Store B before Load A. */
+	smp_mb();
+	/*
+	 * Re-load @addr_p after storing @addr to the hazard pointer slot.
+	 */
+	addr2 = READ_ONCE(*addr_p);	/* Load A */
+	/*
+	 * If the @addr_p content has changed since the first load,
+	 * retire the hazard pointer and try again.
+	 */
+	if (!ptr_eq(addr2, addr)) {
+		WRITE_ONCE(slot->addr, NULL);
+		if (!addr2)
+			return NULL;
+		addr = addr2;
+		goto retry;
+	}
+	if (_slot)
+		*_slot = slot;
+	/*
+	 * Use @addr2 loaded from the second READ_ONCE() to preserve
+	 * address dependency ordering.
+	 */
+	return addr2;
+}
+
+/*
+ * Use a comma expression within typeof: __typeof__((void)**(addr_p), *(addr_p))
+ * to generate a compile error if @addr_p is not a pointer to a pointer.
+ */
+#define hazptr_load_try_protect(domain, addr_p, slot_p)			\
+	((__typeof__((void)**(addr_p), *(addr_p))) __hazptr_load_try_protect(domain, (void * const *) (addr_p), slot_p))
+
+/* Retire the protected hazard pointer from @slot.
+ */
+static inline
+void hazptr_retire(struct hazptr_slot *slot, void *addr)
+{
+	WARN_ON_ONCE(slot->addr != addr);
+	smp_store_release(&slot->addr, NULL);
+}
+
+#endif /* _LINUX_HAZPTR_H */
diff --git a/kernel/Makefile b/kernel/Makefile
index 3c13240dfc9f..bf6ed81d5983 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -7,7 +7,7 @@ obj-y     = fork.o exec_domain.o panic.o \
 	    cpu.o exit.o softirq.o resource.o \
 	    sysctl.o capability.o ptrace.o user.o \
 	    signal.o sys.o umh.o workqueue.o pid.o task_work.o \
-	    extable.o params.o \
+	    extable.o params.o hazptr.o \
 	    kthread.o sys_ni.o nsproxy.o \
 	    notifier.o ksysfs.o cred.o reboot.o \
 	    async.o range.o smpboot.o ucount.o regset.o ksyms_common.o
diff --git a/kernel/hazptr.c b/kernel/hazptr.c
new file mode 100644
index 000000000000..3f9f14afbf1d
--- /dev/null
+++ b/kernel/hazptr.c
@@ -0,0 +1,51 @@
+// SPDX-FileCopyrightText: 2024 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+//
+// SPDX-License-Identifier: LGPL-2.1-or-later
+
+/*
+ * hazptr: Hazard Pointers
+ */
+
+#include <linux/hazptr.h>
+#include <linux/percpu.h>
+
+/*
+ * hazptr_scan: Scan hazard pointer domain for @addr.
+ *
+ * Scan hazard pointer domain for @addr.
+ * If @on_match_cb is NULL, wait to observe that each slot contains a value
+ * that differs from @addr before returning.
+ * If @on_match_cb is non-NULL, invoke @on_match_cb for each slot containing
+ * @addr instead of waiting.
+ */
+void hazptr_scan(struct hazptr_domain *hazptr_domain, void *addr,
+		 void (*on_match_cb)(int cpu, struct hazptr_slot *slot, void *addr))
+{
+	struct hazptr_slot __percpu *percpu_slots = hazptr_domain->percpu_slots;
+	int cpu;
+
+	/* Should only be called from preemptible context. */
+	lockdep_assert_preemption_enabled();
+
+	/*
+	 * Store A precedes hazptr_scan(): it unpublishes @addr (sets it to
+	 * NULL or to a different value), and thus hides it from hazard
+	 * pointer readers.
+	 */
+	if (!addr)
+		return;
+	/* Memory ordering: Store A before Load B. */
+	smp_mb();
+	/* Scan all CPUs' slots.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct hazptr_slot *slot = per_cpu_ptr(percpu_slots, cpu);
+
+		if (on_match_cb) {
+			if (smp_load_acquire(&slot->addr) == addr)	/* Load B */
+				on_match_cb(cpu, slot, addr);
+		} else {
+			/* Busy-wait while the slot contains @addr. */
+			smp_cond_load_acquire(&slot->addr, VAL != addr);	/* Load B */
+		}
+	}
+}