
[v2,06/19] lib/stackdepot: fix and clean-up atomic annotations

Message ID e78360a883edac7bc3c6a351c99a6019beacf264.1694625260.git.andreyknvl@google.com (mailing list archive)
State New
Series stackdepot: allow evicting stack traces

Commit Message

andrey.konovalov@linux.dev Sept. 13, 2023, 5:14 p.m. UTC
From: Andrey Konovalov <andreyknvl@google.com>

Simplify comments accompanying the use of atomic accesses in the
stack depot code.

Also drop smp_load_acquire from next_pool_required in depot_init_pool,
as both depot_init_pool and all the smp_store_release's to this variable
are executed under the stack depot lock.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>

---

This patch is not strictly required, as the atomic accesses are fully
removed in one of the later patches. However, I decided to keep the
patch just in case we end up needing these atomics in the following
iterations of this series.

Changes v1->v2:
- Minor comment fix as suggested by Marco.
- Drop READ_ONCE marking for next_pool_required.
---
 lib/stackdepot.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)
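
For readers following the reasoning in the commit message, here is a minimal userspace analogue of the pattern under discussion. It is a sketch only: C11 stdatomic and a pthread mutex stand in for the kernel's smp_store_release()/smp_load_acquire() and the stack depot lock, and the names (fast_path_check, init_pool_locked, pool_lock) are made up for illustration, not stackdepot code. It shows why the lockless fast path needs an acquire load to pair with the release store, while a path that always runs under the same lock as the writers can read the flag plainly.

/* Userspace sketch of the next_pool_required pattern (illustrative only).
 * Build with: cc -std=c11 -pthread sketch.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int next_pool_required = 1;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lockless fast path: the acquire load pairs with the release store below. */
static int fast_path_check(void)
{
	return atomic_load_explicit(&next_pool_required, memory_order_acquire);
}

/* Called only with pool_lock held. A plain (relaxed) read is enough here,
 * because every write to next_pool_required also happens under pool_lock,
 * and the lock itself orders those accesses.
 */
static void init_pool_locked(void)
{
	if (!atomic_load_explicit(&next_pool_required, memory_order_relaxed))
		return;
	/* ... set up the next pool ... */
	/* The release store pairs with the acquire load in fast_path_check(). */
	atomic_store_explicit(&next_pool_required, 0, memory_order_release);
}

int main(void)
{
	if (fast_path_check()) {
		pthread_mutex_lock(&pool_lock);
		init_pool_locked();
		pthread_mutex_unlock(&pool_lock);
	}
	printf("next_pool_required = %d\n",
	       atomic_load_explicit(&next_pool_required, memory_order_relaxed));
	return 0;
}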

Comments

Alexander Potapenko Oct. 6, 2023, 4:14 p.m. UTC | #1
On Wed, Sep 13, 2023 at 7:15 PM <andrey.konovalov@linux.dev> wrote:
>
> From: Andrey Konovalov <andreyknvl@google.com>
>
> Simplify comments accompanying the use of atomic accesses in the
> stack depot code.
>
> Also drop smp_load_acquire from next_pool_required in depot_init_pool,
> as both depot_init_pool and all the smp_store_release's to this variable
> are executed under the stack depot lock.
>
> Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

(but see below)


>                  * Move on to the next pool.
>                  * WRITE_ONCE pairs with potential concurrent read in
> -                * stack_depot_fetch().
> +                * stack_depot_fetch.

Why are you removing the parentheses here? kernel-doc uses them to
tell functions from non-functions, and having them in non-doc comments
sounds consistent.
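
For readers unfamiliar with the convention being discussed: in kernel comments, a trailing pair of parentheses marks a name as a function or function-like macro, which kernel-doc relies on when cross-referencing. A comment in the style Alexander asks to keep would look roughly like this (an illustrative snippet, not a line from the patch):

/*
 * WRITE_ONCE() pairs with the potential concurrent READ_ONCE() in
 * stack_depot_fetch(): the trailing parentheses mark these names as
 * functions (or function-like macros), while bare identifiers such as
 * pool_index remain plain variables.
 */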
Alexander Potapenko Oct. 6, 2023, 5:21 p.m. UTC | #2
On Fri, Oct 6, 2023 at 6:14 PM Alexander Potapenko <glider@google.com> wrote:
>
> On Wed, Sep 13, 2023 at 7:15 PM <andrey.konovalov@linux.dev> wrote:
> >
> > From: Andrey Konovalov <andreyknvl@google.com>
> >
> > Simplify comments accompanying the use of atomic accesses in the
> > stack depot code.
> >
> > Also drop smp_load_acquire from next_pool_required in depot_init_pool,
> > as both depot_init_pool and all the smp_store_release's to this variable
> > are executed under the stack depot lock.

Maybe add this to the comment before "if (!next_pool_required)" ?
Andrey Konovalov Oct. 23, 2023, 4:15 p.m. UTC | #3
On Fri, Oct 6, 2023 at 7:22 PM Alexander Potapenko <glider@google.com> wrote:
>
> On Fri, Oct 6, 2023 at 6:14 PM Alexander Potapenko <glider@google.com> wrote:
> >
> > On Wed, Sep 13, 2023 at 7:15 PM <andrey.konovalov@linux.dev> wrote:
> > >
> > > From: Andrey Konovalov <andreyknvl@google.com>
> > >
> > > Simplify comments accompanying the use of atomic accesses in the
> > > stack depot code.
> > >
> > > Also drop smp_load_acquire from next_pool_required in depot_init_pool,
> > > as both depot_init_pool and all the smp_store_release's to this variable
> > > are executed under the stack depot lock.
>
> Maybe add this to the comment before "if (!next_pool_required)" ?

Will do in v3.

Re removed parentheses: will restore them in v3.

Thanks!
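
For illustration, the comment Andrey agrees to add in v3 before the next_pool_required check could look roughly like this (a hypothetical sketch of the wording, not the actual v3 change):

	/*
	 * Check whether the next pool needs to be initialized. No ordering
	 * is needed on this read: depot_init_pool() and every
	 * smp_store_release() to next_pool_required run under the stack
	 * depot lock, so a plain read is sufficient here.
	 */
	if (!next_pool_required)
		return;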

Patch

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 128ece21afe9..babd453261f0 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -225,10 +225,8 @@  static void depot_init_pool(void **prealloc)
 	/*
 	 * If the next pool is already initialized or the maximum number of
 	 * pools is reached, do not use the preallocated memory.
-	 * smp_load_acquire() here pairs with smp_store_release() below and
-	 * in depot_alloc_stack().
 	 */
-	if (!smp_load_acquire(&next_pool_required))
+	if (!next_pool_required)
 		return;
 
 	/* Check if the current pool is not yet allocated. */
@@ -249,8 +247,8 @@  static void depot_init_pool(void **prealloc)
 		 * At this point, either the next pool is initialized or the
 		 * maximum number of pools is reached. In either case, take
 		 * note that initializing another pool is not required.
-		 * This smp_store_release pairs with smp_load_acquire() above
-		 * and in stack_depot_save().
+		 * smp_store_release pairs with smp_load_acquire in
+		 * stack_depot_save.
 		 */
 		smp_store_release(&next_pool_required, 0);
 	}
@@ -274,15 +272,15 @@  depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 		/*
 		 * Move on to the next pool.
 		 * WRITE_ONCE pairs with potential concurrent read in
-		 * stack_depot_fetch().
+		 * stack_depot_fetch.
 		 */
 		WRITE_ONCE(pool_index, pool_index + 1);
 		pool_offset = 0;
 		/*
 		 * If the maximum number of pools is not reached, take note
 		 * that the next pool needs to initialized.
-		 * smp_store_release() here pairs with smp_load_acquire() in
-		 * stack_depot_save() and depot_init_pool().
+		 * smp_store_release pairs with smp_load_acquire in
+		 * stack_depot_save.
 		 */
 		if (pool_index + 1 < DEPOT_MAX_POOLS)
 			smp_store_release(&next_pool_required, 1);
@@ -324,7 +322,7 @@  static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 	union handle_parts parts = { .handle = handle };
 	/*
 	 * READ_ONCE pairs with potential concurrent write in
-	 * depot_alloc_stack().
+	 * depot_alloc_stack.
 	 */
 	int pool_index_cached = READ_ONCE(pool_index);
 	void *pool;
@@ -413,8 +411,7 @@  depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 
 	/*
 	 * Fast path: look the stack trace up without locking.
-	 * The smp_load_acquire() here pairs with smp_store_release() to
-	 * |bucket| below.
+	 * smp_load_acquire pairs with smp_store_release to |bucket| below.
 	 */
 	found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
 	if (found)
@@ -424,8 +421,8 @@  depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 	 * Check if another stack pool needs to be initialized. If so, allocate
 	 * the memory now - we won't be able to do that under the lock.
 	 *
-	 * The smp_load_acquire() here pairs with smp_store_release() to
-	 * |next_pool_inited| in depot_alloc_stack() and depot_init_pool().
+	 * smp_load_acquire pairs with smp_store_release in depot_alloc_stack
+	 * and depot_init_pool.
 	 */
 	if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
 		/*
@@ -451,8 +448,8 @@  depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		if (new) {
 			new->next = *bucket;
 			/*
-			 * This smp_store_release() pairs with
-			 * smp_load_acquire() from |bucket| above.
+			 * smp_store_release pairs with smp_load_acquire
+			 * from |bucket| above.
 			 */
 			smp_store_release(bucket, new);
 			found = new;