From patchwork Tue Sep 17 14:33:59 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13806226
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org, lkmm@vger.kernel.org
Cc: "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Mark Rutland, Thomas Gleixner, Kent Overstreet, Linus Torvalds, Vlastimil Babka, maged.michael@gmail.com, Neeraj Upadhyay
Subject: [RFC PATCH 1/4] hazptr: Add initial implementation of hazard pointers
Date: Tue, 17 Sep 2024 07:33:59 -0700
Message-ID: <20240917143402.930114-2-boqun.feng@gmail.com>
In-Reply-To: <20240917143402.930114-1-boqun.feng@gmail.com>
References: <20240917143402.930114-1-boqun.feng@gmail.com>

Hazard pointers [1] provide a way to dynamically distribute refcounting
and can be used to improve the scalability of refcounting without
significant space cost.

Hazard pointers are similar to RCU: they build synchronization between
two parties, readers and updaters. Readers are the refcount users: they
acquire and release refcounts. Updaters clean up objects when no readers
are referencing them (via call_hazptr()). The difference is that,
instead of waiting for a grace period, hazard pointers can free an
object as soon as no one is referencing it. This means that, for a
particular workload, hazard pointers may have a smaller memory footprint
due to fewer pending callbacks.

The synchronization between readers and updaters is built around "hazard
pointer slots": slots readers can use to store a pointer value.

Reader side protection:

1. Read the value of a pointer to the target data element.
2. Store it to a hazard pointer slot.
3. Enforce full ordering (e.g. smp_mb()).
4.
Re-read the original pointer, reset the slot and retry if the value changed.
5. Otherwise, the continued existence of the target data element is guaranteed.

Updater side check:

1. Unpublish the target data element (e.g. set the pointer value to NULL).
2. Enforce full ordering.
3. Read the value from a hazard pointer slot.
4. If the value doesn't match the target data element, this slot does not
   represent a reference to it.
5. Otherwise, the updater needs to re-check (step 3).

To distribute the accesses of hazptr slots from different contexts,
hazptr_context is introduced. Users need to define/allocate their own
hazptr_context to allocate hazard pointer slots.

For the updater side to confirm there is no existing reference, it needs
to scan all the possible slots. To speed up this process, hazptr_context
also contains an rbtree node for each slot so that the updater can cache
the reader-scan result in an rbtree. The rbtree nodes are pre-allocated
in this way to prevent "allocating memory to free memory" in extreme
cases.

[1]: M. M. Michael, "Hazard pointers: safe memory reclamation for
lock-free objects," in IEEE Transactions on Parallel and Distributed
Systems, vol. 15, no. 6, pp. 491-504, June 2004

Co-developed-by: Paul E. McKenney
Signed-off-by: Paul E. McKenney
Co-developed-by: Neeraj Upadhyay
Signed-off-by: Neeraj Upadhyay
Signed-off-by: Boqun Feng
---
 include/linux/hazptr.h |  83 ++++++++
 kernel/Makefile        |   1 +
 kernel/hazptr.c        | 463 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 547 insertions(+)
 create mode 100644 include/linux/hazptr.h
 create mode 100644 kernel/hazptr.c

diff --git a/include/linux/hazptr.h b/include/linux/hazptr.h
new file mode 100644
index 000000000000..4548ca8c75eb
--- /dev/null
+++ b/include/linux/hazptr.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_HAZPTR_H
+#define _LINUX_HAZPTR_H
+
+#include <linux/list.h>
+#include <linux/rbtree.h>
+#include <linux/spinlock.h>
+
+/* Hazard pointer slot.
+ */
+typedef void* hazptr_t;
+
+#define HAZPTR_SLOT_PER_CTX 8
+
+struct hazptr_slot_snap {
+	struct rb_node node;
+	hazptr_t slot;
+};
+
+/*
+ * A set of hazard pointer slots for a context.
+ *
+ * The context can be a task, CPU or reader (depends on the use case). It may
+ * be allocated statically or dynamically. It can only be used after calling
+ * init_hazptr_context(), and users are also responsible to call
+ * cleanup_hazptr_context() when it's not used any more.
+ */
+struct hazptr_context {
+	// The lock of the percpu context lists.
+	spinlock_t *lock;
+
+	struct list_head list;
+	struct hazptr_slot_snap snaps[HAZPTR_SLOT_PER_CTX];
+	____cacheline_aligned hazptr_t slots[HAZPTR_SLOT_PER_CTX];
+};
+
+void init_hazptr_context(struct hazptr_context *hzcp);
+void cleanup_hazptr_context(struct hazptr_context *hzcp);
+hazptr_t *hazptr_alloc(struct hazptr_context *hzcp);
+void hazptr_free(struct hazptr_context *hzcp, hazptr_t *hzp);
+
+#define hazptr_tryprotect(hzp, gp, field) (typeof(gp))__hazptr_tryprotect(hzp, (void **)&(gp), offsetof(typeof(*gp), field))
+#define hazptr_protect(hzp, gp, field) ({			\
+	typeof(gp) ___p;					\
+								\
+	___p = hazptr_tryprotect(hzp, gp, field);		\
+	BUG_ON(!___p);						\
+	___p;							\
+})
+
+static inline void *__hazptr_tryprotect(hazptr_t *hzp,
+					void *const *p,
+					unsigned long head_offset)
+{
+	void *ptr;
+	struct callback_head *head;
+
+	ptr = READ_ONCE(*p);
+
+	if (ptr == NULL)
+		return NULL;
+
+	head = (struct callback_head *)(ptr + head_offset);
+
+	WRITE_ONCE(*hzp, head);
+	smp_mb();
+
+	ptr = READ_ONCE(*p);			// read again
+
+	if (ptr + head_offset != head) {	// pointer changed
+		WRITE_ONCE(*hzp, NULL);		// reset hazard pointer
+		return NULL;
+	} else
+		return ptr;
+}
+
+static inline void hazptr_clear(hazptr_t *hzp)
+{
+	/* Pairs with smp_load_acquire() in reader scan.
+	 */
+	smp_store_release(hzp, NULL);
+}
+
+void call_hazptr(struct callback_head *head, rcu_callback_t func);
+#endif
diff --git a/kernel/Makefile b/kernel/Makefile
index 3c13240dfc9f..7927264b9870 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -50,6 +50,7 @@ obj-y += rcu/
 obj-y += livepatch/
 obj-y += dma/
 obj-y += entry/
+obj-y += hazptr.o
 obj-$(CONFIG_MODULES) += module/
 obj-$(CONFIG_KCMP) += kcmp.o
diff --git a/kernel/hazptr.c b/kernel/hazptr.c
new file mode 100644
index 000000000000..f22ccc2a4a62
--- /dev/null
+++ b/kernel/hazptr.c
@@ -0,0 +1,463 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include
+#include
+#include
+#include
+#include
+
+#define HAZPTR_UNUSED (1ul)
+
+/* Per-CPU data for hazard pointer management. */
+struct hazptr_percpu {
+	spinlock_t ctx_lock;		/* hazptr context list lock */
+	struct list_head ctx_list;	/* hazptr context list */
+	spinlock_t callback_lock;	/* Per-CPU callback queue lock */
+	struct callback_head *queued;	/* Per-CPU callback queue */
+	struct callback_head **tail;
+};
+
+/*
+ * Per-CPU data contains context lists and callbacks, which are maintained in a
+ * per-CPU manner to reduce potential contention. This means a global scan (for
+ * readers or callbacks) has to take each CPU's lock, but it's less problematic
+ * because that's a slowpath.
+ */
+DEFINE_PER_CPU(struct hazptr_percpu, hzpcpu);
+
+/* An rbtree that stores the reader-scan results of all hazptr slots. */
+struct hazptr_reader_tree {
+	spinlock_t lock;
+	struct rb_root root;
+};
+
+static void init_hazptr_reader_tree(struct hazptr_reader_tree *tree)
+{
+	spin_lock_init(&tree->lock);
+	tree->root = RB_ROOT;
+}
+
+static bool is_null_or_unused(hazptr_t slot)
+{
+	return slot == NULL || ((unsigned long)slot) == HAZPTR_UNUSED;
+}
+
+static int hazptr_node_cmp(const void *key, const struct rb_node *curr)
+{
+	unsigned long slot = (unsigned long)key;
+	struct hazptr_slot_snap *curr_snap = container_of(curr, struct hazptr_slot_snap, node);
+	unsigned long curr_slot = (unsigned long)(curr_snap->slot);
+
+	if (slot < curr_slot)
+		return -1;
+	else if (slot > curr_slot)
+		return 1;
+	else
+		return 0;
+}
+
+static bool hazptr_node_less(struct rb_node *new, const struct rb_node *curr)
+{
+	struct hazptr_slot_snap *new_snap = container_of(new, struct hazptr_slot_snap, node);
+
+	return hazptr_node_cmp((void *)new_snap->slot, curr) < 0;
+}
+
+/* Add the snapshot into a search tree. tree->lock must be held. */
+static inline void reader_add_locked(struct hazptr_reader_tree *tree,
+				     struct hazptr_slot_snap *snap)
+{
+	lockdep_assert_held(&tree->lock);
+	BUG_ON(is_null_or_unused(snap->slot));
+
+	rb_add(&snap->node, &tree->root, hazptr_node_less);
+}
+
+static inline void reader_add(struct hazptr_reader_tree *tree,
+			      struct hazptr_slot_snap *snap)
+{
+	guard(spinlock_irqsave)(&tree->lock);
+
+	reader_add_locked(tree, snap);
+}
+
+/* Delete the snapshot from a search tree. tree->lock must be held.
+ */
+static inline void reader_del_locked(struct hazptr_reader_tree *tree,
+				     struct hazptr_slot_snap *snap)
+{
+	lockdep_assert_held(&tree->lock);
+
+	rb_erase(&snap->node, &tree->root);
+}
+
+static inline void reader_del(struct hazptr_reader_tree *tree,
+			      struct hazptr_slot_snap *snap)
+{
+	guard(spinlock_irqsave)(&tree->lock);
+
+	reader_del_locked(tree, snap);
+}
+
+/* Find whether a reader exists. tree->lock must be held. */
+static inline bool reader_exist_locked(struct hazptr_reader_tree *tree,
+				       unsigned long slot)
+{
+	lockdep_assert_held(&tree->lock);
+
+	return !!rb_find((void *)slot, &tree->root, hazptr_node_cmp);
+}
+
+static inline bool reader_exist(struct hazptr_reader_tree *tree,
+				unsigned long slot)
+{
+	guard(spinlock_irqsave)(&tree->lock);
+
+	return reader_exist_locked(tree, slot);
+}
+
+/*
+ * Scan the readers from one hazptr context and update the global readers tree.
+ *
+ * Must be called with hzcp->lock held.
+ */
+static void hazptr_context_snap_readers_locked(struct hazptr_reader_tree *tree,
+					       struct hazptr_context *hzcp)
+{
+	lockdep_assert_held(hzcp->lock);
+
+	for (int i = 0; i < HAZPTR_SLOT_PER_CTX; i++) {
+		/*
+		 * Pairs with smp_store_release() in hazptr_{clear,free}().
+		 *
+		 * Ensure
+		 *
+		 * <reader>		<updater>
+		 *
+		 * [access protected pointers]
+		 * hazptr_clear();
+		 *   smp_store_release()
+		 *			// in reader scan.
+		 *			smp_load_acquire(); // is null or unused.
+		 *			[run callbacks] // all accesses from
+		 *					// reader must be
+		 *					// observed.
+		 */
+		hazptr_t val = smp_load_acquire(&hzcp->slots[i]);
+
+		if (!is_null_or_unused(val)) {
+			struct hazptr_slot_snap *snap = &hzcp->snaps[i];
+
+			// Already in the tree, need to remove first.
+			if (!is_null_or_unused(snap->slot)) {
+				reader_del(tree, snap);
+			}
+			snap->slot = val;
+			reader_add(tree, snap);
+		}
+	}
+}
+
+/*
+ * Initialize a hazptr context.
+ *
+ * Must be called before using the context for hazptr allocation.
+ */
+void init_hazptr_context(struct hazptr_context *hzcp)
+{
+	struct hazptr_percpu *pcpu = this_cpu_ptr(&hzpcpu);
+
+	for (int i = 0; i < HAZPTR_SLOT_PER_CTX; i++) {
+		hzcp->slots[i] = (hazptr_t)HAZPTR_UNUSED;
+		hzcp->snaps[i].slot = (hazptr_t)HAZPTR_UNUSED;
+	}
+
+	guard(spinlock_irqsave)(&pcpu->ctx_lock);
+	list_add(&hzcp->list, &pcpu->ctx_list);
+	hzcp->lock = &pcpu->ctx_lock;
+}
+
+struct hazptr_struct {
+	struct work_struct work;
+	bool scheduled;
+
+	// List of callbacks; the workqueue function moves per-CPU queued
+	// callbacks into this global queued list.
+	spinlock_t callback_lock;
+	struct callback_head *queued;
+	struct callback_head **tail;
+
+	struct hazptr_reader_tree readers;
+};
+
+static struct hazptr_struct hazptr_struct;
+
+/*
+ * Clean up hazptr context.
+ *
+ * Must be called before freeing the context. This function also removes any
+ * reference held by the hazard pointer slots in the context, even if
+ * hazptr_clear() or hazptr_free() was not called previously.
+ */
+void cleanup_hazptr_context(struct hazptr_context *hzcp)
+{
+	if (hzcp->lock) {
+		scoped_guard(spinlock_irqsave, hzcp->lock) {
+			list_del(&hzcp->list);
+			hzcp->lock = NULL;
+		}
+
+		for (int i = 0; i < HAZPTR_SLOT_PER_CTX; i++) {
+			struct hazptr_slot_snap *snap = &hzcp->snaps[i];
+
+			if (!is_null_or_unused(snap->slot))
+				reader_del(&hazptr_struct.readers, snap);
+		}
+	}
+}
+
+/*
+ * Allocate a hazptr slot from a hazptr_context.
+ *
+ * Return: NULL if the allocation fails, otherwise the address of the
+ * allocated slot.
+ */
+hazptr_t *hazptr_alloc(struct hazptr_context *hzcp)
+{
+	unsigned long unused;
+
+	for (int i = 0; i < HAZPTR_SLOT_PER_CTX; i++) {
+		if (((unsigned long)READ_ONCE(hzcp->slots[i])) == HAZPTR_UNUSED) {
+			unused = HAZPTR_UNUSED;
+
+			/*
+			 * The rwm-sequence is relied on here.
+			 *
+			 * This is needed since in case of a previous reader:
+			 *
+			 * <reader 1>	<reader 2>	<updater>
+			 * [access protected pointers]
+			 * hazptr_free():
+			 *   smp_store_release(); // hzptr == UNUSED
+			 *		hazptr_alloc():
+			 *		  try_cmpxchg_relaxed(); // hzptr == NULL
+			 *
+			 *				// in reader scan
+			 *				smp_load_acquire(); // is null
+			 *				[run callbacks]
+			 *
+			 * Because of the rwm-sequence of release operations,
+			 * when callbacks are run, accesses from reader 1 must
+			 * be already observed by the updater.
+			 */
+			if (try_cmpxchg_relaxed(&hzcp->slots[i], &unused, NULL)) {
+				return (hazptr_t *)&hzcp->slots[i];
+			}
+		}
+	}
+
+	return NULL;
+}
+
+/* Free a hazptr slot. */
+void hazptr_free(struct hazptr_context *hzcp, hazptr_t *hzptr)
+{
+	WARN_ON(((unsigned long)*hzptr) == HAZPTR_UNUSED);
+
+	/* Pairs with smp_load_acquire() in reader scan */
+	smp_store_release(hzptr, (void *)HAZPTR_UNUSED);
+}
+
+/* Scan all possible readers, and update the global reader tree. */
+static void check_readers(struct hazptr_struct *hzst)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		struct hazptr_percpu *pcpu = per_cpu_ptr(&hzpcpu, cpu);
+		struct hazptr_context *ctx;
+
+		guard(spinlock_irqsave)(&pcpu->ctx_lock);
+		list_for_each_entry(ctx, &pcpu->ctx_list, list)
+			hazptr_context_snap_readers_locked(&hzst->readers, ctx);
+	}
+}
+
+/*
+ * Start the background work for hazptr callback handling if not started.
+ *
+ * Must be called with hazptr_struct lock held.
+ */
+static void kick_hazptr_work(void)
+{
+	if (hazptr_struct.scheduled)
+		return;
+
+	queue_work(system_wq, &hazptr_struct.work);
+	hazptr_struct.scheduled = true;
+}
+
+/*
+ * Check which callbacks are ready to be called.
+ *
+ * Return: a list of callbacks whose corresponding objects no reader is
+ * referencing.
+ */
+static struct callback_head *do_hazptr(struct hazptr_struct *hzst)
+{
+	struct callback_head *tmp, **curr;
+	struct callback_head *todo = NULL, **todo_tail = &todo;
+
+	// Currently the lock is unnecessary, but maybe we will have multiple
+	// work_structs sharing the same callback list in the future.
+	guard(spinlock_irqsave)(&hzst->callback_lock);
+
+	curr = &hzst->queued;
+
+	while ((tmp = *curr)) {
+		// No reader, we can free the object.
+		if (!reader_exist(&hzst->readers, (unsigned long)tmp)) {
+			// Add tmp into todo.
+			*todo_tail = tmp;
+			todo_tail = &tmp->next;
+
+			// Delete tmp from ->queued and move to the next entry.
+			*curr = tmp->next;
+		} else
+			curr = &tmp->next;
+	}
+
+	hzst->tail = curr;
+
+	// Keep checking if callback list is not empty.
+	if (hzst->queued)
+		kick_hazptr_work();
+
+	*todo_tail = NULL;
+
+	return todo;
+}
+
+static void hazptr_work_func(struct work_struct *work)
+{
+	struct hazptr_struct *hzst = container_of(work, struct hazptr_struct, work);
+	struct callback_head *todo;
+
+	// Advance callbacks from hzpcpu to hzst
+	scoped_guard(spinlock_irqsave, &hzst->callback_lock) {
+		int cpu;
+
+		hzst->scheduled = false;
+		for_each_possible_cpu(cpu) {
+			struct hazptr_percpu *pcpu = per_cpu_ptr(&hzpcpu, cpu);
+
+			guard(spinlock)(&pcpu->callback_lock);
+
+			if (pcpu->queued) {
+				*(hzst->tail) = pcpu->queued;
+				hzst->tail = pcpu->tail;
+				pcpu->queued = NULL;
+				pcpu->tail = &pcpu->queued;
+			}
+		}
+	}
+
+	// Pairs with the smp_mb() on the reader side. This guarantees that if
+	// the hazptr work picked up the callback from an updater and the
+	// updater set the global pointer to NULL before enqueuing the callback,
+	// the hazptr work must observe a reader that protects the hazptr before
+	// the updater.
+	//
+	// <reader>		<updater>
+	//
+	// ptr = READ_ONCE(gp);
+	// WRITE_ONCE(*hazptr, ptr);
+	// smp_mb(); // => ->strong-fence
+	//			tofree = gp;
+	// ptr = READ_ONCE(gp); // re-read, gp is not NULL
+	//			// => ->fre
+	//			WRITE_ONCE(gp, NULL);
+	//			call_hazptr(gp, ...):
+	//			  lock(->callback_lock);
+	//			  [queued the callback]
+	//			  unlock(->callback_lock);// => ->po-unlock-lock-po
+	//			lock(->callback_lock);
+	//			[move from hzpcpu to hzst]
+	//
+	//			smp_mb(); => ->strong-fence
+	//			ptr = READ_ONCE(*hazptr);
+	//			// ptr == gp, otherwise => ->fre
+	//
+	// If ptr != gp, it means there exists a circle with the following
+	// memory ordering relationships:
+	//	->strong-fence ->fre ->po-unlock-lock-po ->strong-fence ->fre
+	// => (due to the definition of prop)
+	//	->strong-fence ->prop ->strong-fence ->hb ->prop
+	// => (rotate the circle)
+	//	->prop ->strong-fence ->prop ->strong-fence ->hb
+	// => (due to the definition of pb)
+	//	->pb ->pb
+	// but pb is acyclic according to LKMM, so this cannot happen.
+	smp_mb();
+	check_readers(hzst);
+
+	todo = do_hazptr(hzst);
+
+	while (todo) {
+		struct callback_head *next = todo->next;
+		void (*func)(struct callback_head *) = todo->func;
+
+		func(todo);
+
+		todo = next;
+	}
+}
+
+/*
+ * Put @head into the cleanup callback queue.
+ *
+ * @func(@head) will be called after no one is referencing the object. Caller
+ * must ensure the object has been unpublished before calling this.
+ */
+void call_hazptr(struct callback_head *head, rcu_callback_t func)
+{
+	struct hazptr_percpu *pcpu = this_cpu_ptr(&hzpcpu);
+
+	head->func = func;
+	head->next = NULL;
+
+	scoped_guard(spinlock_irqsave, &pcpu->callback_lock) {
+		*(pcpu->tail) = head;
+		pcpu->tail = &head->next;
+	}
+
+	guard(spinlock_irqsave)(&hazptr_struct.callback_lock);
+	kick_hazptr_work();
+}
+
+static int init_hazptr_struct(void)
+{
+	int cpu;
+
+	INIT_WORK(&hazptr_struct.work, hazptr_work_func);
+
+	spin_lock_init(&hazptr_struct.callback_lock);
+	hazptr_struct.queued = NULL;
+	hazptr_struct.tail = &hazptr_struct.queued;
+
+	for_each_possible_cpu(cpu) {
+		struct hazptr_percpu *pcpu = per_cpu_ptr(&hzpcpu, cpu);
+
+		spin_lock_init(&pcpu->ctx_lock);
+		INIT_LIST_HEAD(&pcpu->ctx_list);
+
+		spin_lock_init(&pcpu->callback_lock);
+		pcpu->queued = NULL;
+		pcpu->tail = &pcpu->queued;
+	}
+
+	init_hazptr_reader_tree(&hazptr_struct.readers);
+
+	return 0;
+}
+
+early_initcall(init_hazptr_struct);

From patchwork Tue Sep 17 14:34:00 2024
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13806227
From: Boqun Feng
To:
linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org, lkmm@vger.kernel.org
Cc: "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Mark Rutland, Thomas Gleixner, Kent Overstreet, Linus Torvalds, Vlastimil Babka, maged.michael@gmail.com
Subject: [RFC PATCH 2/4] refscale: Add benchmarks for hazptr
Date: Tue, 17 Sep 2024 07:34:00 -0700
Message-ID: <20240917143402.930114-3-boqun.feng@gmail.com>
In-Reply-To: <20240917143402.930114-1-boqun.feng@gmail.com>
References: <20240917143402.930114-1-boqun.feng@gmail.com>

Add benchmarks for hazptr to evaluate its reader-side performance
against other refcounting mechanisms.

Signed-off-by: Boqun Feng
---
 kernel/rcu/refscale.c | 79 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 78 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
index f4ea5b1ec068..7e76ae5159e6 100644
--- a/kernel/rcu/refscale.c
+++ b/kernel/rcu/refscale.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include <linux/hazptr.h>
 
 #include "rcu.h"
 
@@ -316,6 +317,82 @@ static struct ref_scale_ops refcnt_ops = {
 	.name = "refcnt"
 };
 
+struct hazptr_data {
+	struct callback_head head;
+	int i;
+};
+
+static struct hazptr_data *hazptr_data;
+
+static bool hazptr_scale_init(void)
+{
+	hazptr_data = kmalloc(sizeof(*hazptr_data), GFP_KERNEL);
+
+	return !!hazptr_data;
+}
+
+static void free_hazptr_data(struct callback_head *head)
+{
+	struct hazptr_data *tofree = container_of(head, struct hazptr_data, head);
+
+	kfree(tofree);
+}
+
+static void hazptr_scale_cleanup(void)
+{
+	if (hazptr_data) {
+		struct hazptr_data *tmp = hazptr_data;
+
+		WRITE_ONCE(hazptr_data, NULL);
+
call_hazptr(&tmp->head, free_hazptr_data); + } +} + +static void hazptr_section(const int nloops) +{ + int i; + struct hazptr_context ctx; + hazptr_t *hzptr; + + init_hazptr_context(&ctx); + hzptr = hazptr_alloc(&ctx); + + for (i = nloops; i >= 0; i--) { + BUG_ON(!hazptr_protect(hzptr, hazptr_data, head)); + hazptr_clear(hzptr); + } + + hazptr_free(&ctx, hzptr); + cleanup_hazptr_context(&ctx); +} + +static void hazptr_delay_section(const int nloops, const int udl, const int ndl) +{ + int i; + struct hazptr_context ctx; + hazptr_t *hzptr; + + init_hazptr_context(&ctx); + hzptr = hazptr_alloc(&ctx); + + for (i = nloops; i >= 0; i--) { + BUG_ON(!hazptr_protect(hzptr, hazptr_data, head)); + un_delay(udl, ndl); + hazptr_clear(hzptr); + } + + hazptr_free(&ctx, hzptr); + cleanup_hazptr_context(&ctx); +} + +static struct ref_scale_ops hazptr_ops = { + .init = hazptr_scale_init, + .cleanup = hazptr_scale_cleanup, + .readsection = hazptr_section, + .delaysection = hazptr_delay_section, + .name = "hazptr" +}; + // Definitions for rwlock static rwlock_t test_rwlock; @@ -1081,7 +1158,7 @@ ref_scale_init(void) static struct ref_scale_ops *scale_ops[] = { &rcu_ops, &srcu_ops, RCU_TRACE_OPS RCU_TASKS_OPS &refcnt_ops, &rwlock_ops, &rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops, &jiffies_ops, - &typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops, + &typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops, &hazptr_ops, }; if (!torture_init_begin(scale_type, verbose)) From patchwork Tue Sep 17 14:34:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Boqun Feng X-Patchwork-Id: 13806228 Received: from mail-yb1-f179.google.com (mail-yb1-f179.google.com [209.85.219.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AF43317B513; Tue, 17 Sep 2024 14:35:00 +0000 (UTC) 
From: Boqun Feng
To:
linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org, lkmm@vger.kernel.org
Cc: "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Mark Rutland, Thomas Gleixner, Kent Overstreet, Linus Torvalds, Vlastimil Babka, maged.michael@gmail.com
Subject: [RFC PATCH 3/4] refscale: Add benchmarks for percpu_ref
Date: Tue, 17 Sep 2024 07:34:01 -0700
Message-ID: <20240917143402.930114-4-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240917143402.930114-1-boqun.feng@gmail.com>
References: <20240917143402.930114-1-boqun.feng@gmail.com>
Precedence: bulk
X-Mailing-List: rcu@vger.kernel.org
MIME-Version: 1.0

Benchmarks for percpu_ref are added to compare the reader-side performance of percpu_ref with that of the other refcounting mechanisms.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 kernel/rcu/refscale.c | 50 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 49 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
index 7e76ae5159e6..97b73c980c5d 100644
--- a/kernel/rcu/refscale.c
+++ b/kernel/rcu/refscale.c
@@ -393,6 +393,54 @@ static struct ref_scale_ops hazptr_ops = {
	.name		= "hazptr"
 };

+static struct percpu_ref percpu_ref;
+
+static void percpu_ref_dummy(struct percpu_ref *ref) {}
+
+static bool percpu_ref_scale_init(void)
+{
+	int ret;
+
+	ret = percpu_ref_init(&percpu_ref, percpu_ref_dummy, PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
+	if (ret)
+		return false;
+
+	percpu_ref_switch_to_percpu(&percpu_ref);
+
+	return true;
+}
+
+static void percpu_ref_scale_cleanup(void)
+{
+	percpu_ref_exit(&percpu_ref);
+}
+
+static void percpu_ref_section(const int nloops)
+{
+	int i;
+
+	for (i = nloops; i >= 0; i--) {
+		percpu_ref_get(&percpu_ref);
+		percpu_ref_put(&percpu_ref);
+	}
+}
+
+static void percpu_ref_delay_section(const int nloops, const int udl, const int ndl)
+{
+	int i;
+
+	for (i = nloops; i >= 0; i--) {
+		percpu_ref_get(&percpu_ref);
+		un_delay(udl, ndl);
+		percpu_ref_put(&percpu_ref);
+	}
+}
+
+static struct ref_scale_ops percpu_ref_ops = {
+	.init		= percpu_ref_scale_init,
+	.cleanup	= percpu_ref_scale_cleanup,
+	.readsection	= percpu_ref_section,
+	.delaysection	= percpu_ref_delay_section,
+	.name		= "percpu_ref"
+};
+
 // Definitions for rwlock

 static rwlock_t test_rwlock;

@@ -1158,7 +1206,7 @@ ref_scale_init(void)
	static struct ref_scale_ops *scale_ops[] = {
		&rcu_ops, &srcu_ops, RCU_TRACE_OPS RCU_TASKS_OPS &refcnt_ops, &rwlock_ops,
		&rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops, &jiffies_ops,
-		&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops, &hazptr_ops,
+		&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops, &hazptr_ops, &percpu_ref_ops,
	};

	if (!torture_init_begin(scale_type, verbose))

From patchwork Tue Sep 17 14:34:02 2024
Content-Type: text/plain;
charset="utf-8"
MIME-Version: 1.0
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13806229
From: Boqun Feng
To:
linux-kernel@vger.kernel.org, rcu@vger.kernel.org, linux-mm@kvack.org, lkmm@vger.kernel.org
Cc: "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Mark Rutland, Thomas Gleixner, Kent Overstreet, Linus Torvalds, Vlastimil Babka, maged.michael@gmail.com
Subject: [RFC PATCH 4/4] WIP: hazptr: Add hazptr test sample
Date: Tue, 17 Sep 2024 07:34:02 -0700
Message-ID: <20240917143402.930114-5-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240917143402.930114-1-boqun.feng@gmail.com>
References: <20240917143402.930114-1-boqun.feng@gmail.com>
Precedence: bulk
X-Mailing-List: rcu@vger.kernel.org
MIME-Version: 1.0

Sample code for hazptr. This should go away or get more polished once hazptr support is added to rcutorture.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 samples/Kconfig              |  6 +++
 samples/Makefile             |  1 +
 samples/hazptr/hazptr_test.c | 87 ++++++++++++++++++++++++++++++++++++
 3 files changed, 94 insertions(+)
 create mode 100644 samples/hazptr/hazptr_test.c

diff --git a/samples/Kconfig b/samples/Kconfig
index b288d9991d27..9b42cde35dca 100644
--- a/samples/Kconfig
+++ b/samples/Kconfig
@@ -293,6 +293,12 @@ config SAMPLE_CGROUP

 source "samples/rust/Kconfig"

+config SAMPLE_HAZPTR
+	bool "Build hazptr sample code"
+	help
+	  Build samples that show hazard pointer usage. Currently only
+	  builtin usage is supported.
+
 endif # SAMPLES

 config HAVE_SAMPLE_FTRACE_DIRECT

diff --git a/samples/Makefile b/samples/Makefile
index b85fa64390c5..0be21edc8a30 100644
--- a/samples/Makefile
+++ b/samples/Makefile
@@ -39,3 +39,4 @@ obj-$(CONFIG_SAMPLE_KMEMLEAK) += kmemleak/
 obj-$(CONFIG_SAMPLE_CORESIGHT_SYSCFG) += coresight/
 obj-$(CONFIG_SAMPLE_FPROBE) += fprobe/
 obj-$(CONFIG_SAMPLES_RUST) += rust/
+obj-$(CONFIG_SAMPLE_HAZPTR) += hazptr/

diff --git a/samples/hazptr/hazptr_test.c b/samples/hazptr/hazptr_test.c
new file mode 100644
index 000000000000..3cf0cdc8a83a
--- /dev/null
+++ b/samples/hazptr/hazptr_test.c
@@ -0,0 +1,87 @@
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/hazptr.h>
+
+struct foo {
+	int i;
+	struct callback_head head;
+};
+
+static void simple_func(struct callback_head *head)
+{
+	struct foo *ptr = container_of(head, struct foo, head);
+
+	printk("callback called %px, i is %d\n", ptr, ptr->i);
+	kfree(ptr);
+}
+
+static void simple(void)
+{
+	struct hazptr_context ctx;
+	struct foo *dummy, *tmp, *other;
+	hazptr_t *hptr;
+	hazptr_t *hptr2;
+
+	dummy = kzalloc(sizeof(*dummy), GFP_KERNEL);
+	other = kzalloc(sizeof(*other), GFP_KERNEL);
+
+	// Check both allocations before touching either object.
+	if (!dummy || !other) {
+		kfree(dummy);
+		kfree(other);
+		printk("allocation failed, skip test\n");
+		return;
+	}
+
+	dummy->i = 42;
+	other->i = 43;
+
+	init_hazptr_context(&ctx);
+	hptr = hazptr_alloc(&ctx);
+	BUG_ON(!hptr);
+
+	// Get a second hptr.
+	hptr2 = hazptr_alloc(&ctx);
+	BUG_ON(!hptr2);
+
+	// No one is modifying 'dummy', protection must succeed.
+	BUG_ON(!hazptr_tryprotect(hptr, dummy, head));
+
+	// Simulate changing a global pointer.
+	tmp = dummy;
+	WRITE_ONCE(dummy, other);
+
+	// Callback will run after no active readers.
+	printk("callback added, %px\n", tmp);
+	call_hazptr(&tmp->head, simple_func);
+
+	// No one is modifying 'dummy', protection must succeed.
+	tmp = hazptr_protect(hptr2, dummy, head);
+
+	printk("reader2 got %px, i is %d\n", tmp, tmp->i);
+
+	// The above callback should run after this.
+	hazptr_clear(hptr);
+	printk("first reader is out\n");
+
+	for (int i = 0; i < 10; i++)
+		schedule(); // Yield a few times.
+
+	// Simulate freeing a global pointer.
+	tmp = dummy;
+	WRITE_ONCE(dummy, NULL);
+	printk("callback added, %px\n", tmp);
+	call_hazptr(&tmp->head, simple_func);
+
+	cleanup_hazptr_context(&ctx);
+	printk("no reader here\n");
+
+	for (int i = 0; i < 10; i++)
+		schedule(); // Yield a few times.
+}
+
+static int hazptr_test(void)
+{
+	simple();
+	printk("test ends\n");
+	return 0;
+}
+module_init(hazptr_test);