From patchwork Fri Feb 10 21:15:57 2023
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 13136457
From: andrey.konovalov@linux.dev
To: Marco Elver , Alexander Potapenko
Cc: Andrey Konovalov , Vlastimil Babka , kasan-dev@googlegroups.com,
 Evgenii Stepanov , Andrew Morton , linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH v2 09/18] lib/stackdepot: rename slab to pool
Date: Fri, 10 Feb 2023 22:15:57 +0100
Message-Id: <923c507edb350c3b6ef85860f36be489dfc0ad21.1676063693.git.andreyknvl@google.com>

From: Andrey Konovalov

Use "pool" instead of "slab" for naming the memory regions stack depot
uses to store stack traces. Using "slab" is confusing, as stack depot
pools have nothing to do with the slab allocator.

Also give better names to pool-related global variables: change the
"depot_" prefix to "pool_" to make clear that these variables are
related to stack depot pools.

Also rename the slabindex field (poolindex in v1 of this series) in
handle_parts to pool_index to align its name with the pool_index
global variable.

No functional changes.

Signed-off-by: Andrey Konovalov
Acked-by: Vlastimil Babka
Reviewed-by: Alexander Potapenko

---

Changes v1->v2:
- Use "pool" instead of "slab" for memory regions that store stack
  traces.
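As a standalone illustration of the layout being renamed (not part of
the patch): the sketch below mirrors union handle_parts in plain C11.
The concrete widths assume PAGE_SHIFT == 12 and
STACK_DEPOT_EXTRA_BITS == 5, which are assumptions made for this
example; in the kernel they are derived from the macros visible in the
diff below.

	/*
	 * Userspace sketch of the renamed handle layout. PAGE_SHIFT and
	 * STACK_DEPOT_EXTRA_BITS values are assumed, not taken from the patch.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define STACK_ALLOC_ORDER 2
	#define PAGE_SHIFT 12                   /* assumed, arch-dependent */
	#define STACK_ALLOC_ALIGN 4
	#define STACK_DEPOT_EXTRA_BITS 5        /* assumed default */
	#define STACK_ALLOC_NULL_PROTECTION_BITS 1
	#define STACK_ALLOC_OFFSET_BITS \
		(STACK_ALLOC_ORDER + PAGE_SHIFT - STACK_ALLOC_ALIGN)
	#define STACK_ALLOC_INDEX_BITS \
		(32 - STACK_ALLOC_NULL_PROTECTION_BITS - \
		 STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)

	union handle_parts {
		uint32_t handle;
		struct {
			uint32_t pool_index : STACK_ALLOC_INDEX_BITS; /* was: slabindex */
			uint32_t offset : STACK_ALLOC_OFFSET_BITS;    /* 16-byte units */
			uint32_t valid : STACK_ALLOC_NULL_PROTECTION_BITS;
			uint32_t extra : STACK_DEPOT_EXTRA_BITS;
		};
	};

	int main(void)
	{
		union handle_parts parts = { .handle = 0 };

		/* Encode: record at byte offset 128 in pool 3. */
		parts.pool_index = 3;
		parts.offset = 128 >> STACK_ALLOC_ALIGN;
		parts.valid = 1;

		/* Decode, as stack_depot_fetch() does. */
		printf("handle 0x%08x -> pool %u, offset %u bytes\n",
		       parts.handle, parts.pool_index,
		       parts.offset << STACK_ALLOC_ALIGN);
		return 0;
	}

With these assumed widths, a handle packs a 16-bit pool index and a
10-bit offset counted in 16-byte units, which is why the diff stores
pool_offset >> STACK_ALLOC_ALIGN and fetch shifts it back.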
---
 lib/stackdepot.c | 106 ++++++++++++++++++++++++------------------------
 1 file changed, 53 insertions(+), 53 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index d1ab53197353..522e36cf449f 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -39,7 +39,7 @@
 #define DEPOT_STACK_BITS (sizeof(depot_stack_handle_t) * 8)
 
 #define STACK_ALLOC_NULL_PROTECTION_BITS 1
-#define STACK_ALLOC_ORDER 2 /* 'Slab' size order for stack depot, 4 pages */
+#define STACK_ALLOC_ORDER 2 /* Pool size order for stack depot, 4 pages */
 #define STACK_ALLOC_SIZE (1LL << (PAGE_SHIFT + STACK_ALLOC_ORDER))
 #define STACK_ALLOC_ALIGN 4
 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
@@ -47,16 +47,16 @@
 #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
 		STACK_ALLOC_NULL_PROTECTION_BITS - \
 		STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
-#define STACK_ALLOC_SLABS_CAP 8192
-#define STACK_ALLOC_MAX_SLABS \
-	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
-	 (1LL << (STACK_ALLOC_INDEX_BITS)) : STACK_ALLOC_SLABS_CAP)
+#define STACK_ALLOC_POOLS_CAP 8192
+#define STACK_ALLOC_MAX_POOLS \
+	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_POOLS_CAP) ? \
+	 (1LL << (STACK_ALLOC_INDEX_BITS)) : STACK_ALLOC_POOLS_CAP)
 
 /* The compact structure to store the reference to stacks. */
 union handle_parts {
 	depot_stack_handle_t handle;
 	struct {
-		u32 slabindex : STACK_ALLOC_INDEX_BITS;
+		u32 pool_index : STACK_ALLOC_INDEX_BITS;
 		u32 offset : STACK_ALLOC_OFFSET_BITS;
 		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
 		u32 extra : STACK_DEPOT_EXTRA_BITS;
@@ -91,15 +91,15 @@ static unsigned int stack_bucket_number_order;
 static unsigned int stack_hash_mask;
 
 /* Array of memory regions that store stack traces. */
-static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
-/* Currently used slab in stack_slabs. */
-static int depot_index;
-/* Offset to the unused space in the currently used slab. */
-static size_t depot_offset;
+static void *stack_pools[STACK_ALLOC_MAX_POOLS];
+/* Currently used pool in stack_pools. */
+static int pool_index;
+/* Offset to the unused space in the currently used pool. */
+static size_t pool_offset;
 /* Lock that protects the variables above. */
-static DEFINE_RAW_SPINLOCK(depot_lock);
-/* Whether the next slab is initialized. */
-static int next_slab_inited;
+static DEFINE_RAW_SPINLOCK(pool_lock);
+/* Whether the next pool is initialized. */
+static int next_pool_inited;
 
 static int __init disable_stack_depot(char *str)
 {
@@ -220,30 +220,30 @@ int stack_depot_init(void)
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
-static bool init_stack_slab(void **prealloc)
+static bool init_stack_pool(void **prealloc)
 {
 	if (!*prealloc)
 		return false;
 	/*
 	 * This smp_load_acquire() pairs with smp_store_release() to
-	 * |next_slab_inited| below and in depot_alloc_stack().
+	 * |next_pool_inited| below and in depot_alloc_stack().
 	 */
-	if (smp_load_acquire(&next_slab_inited))
+	if (smp_load_acquire(&next_pool_inited))
 		return true;
-	if (stack_slabs[depot_index] == NULL) {
-		stack_slabs[depot_index] = *prealloc;
+	if (stack_pools[pool_index] == NULL) {
+		stack_pools[pool_index] = *prealloc;
 		*prealloc = NULL;
 	} else {
-		/* If this is the last depot slab, do not touch the next one. */
-		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
-			stack_slabs[depot_index + 1] = *prealloc;
+		/* If this is the last depot pool, do not touch the next one. */
+		if (pool_index + 1 < STACK_ALLOC_MAX_POOLS) {
+			stack_pools[pool_index + 1] = *prealloc;
 			*prealloc = NULL;
 		}
 		/*
 		 * This smp_store_release pairs with smp_load_acquire() from
-		 * |next_slab_inited| above and in stack_depot_save().
+		 * |next_pool_inited| above and in stack_depot_save().
 		 */
-		smp_store_release(&next_slab_inited, 1);
+		smp_store_release(&next_pool_inited, 1);
 	}
 	return true;
 }
@@ -257,35 +257,35 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 
 	required_size = ALIGN(required_size, 1 << STACK_ALLOC_ALIGN);
 
-	if (unlikely(depot_offset + required_size > STACK_ALLOC_SIZE)) {
-		if (unlikely(depot_index + 1 >= STACK_ALLOC_MAX_SLABS)) {
+	if (unlikely(pool_offset + required_size > STACK_ALLOC_SIZE)) {
+		if (unlikely(pool_index + 1 >= STACK_ALLOC_MAX_POOLS)) {
 			WARN_ONCE(1, "Stack depot reached limit capacity");
 			return NULL;
 		}
-		depot_index++;
-		depot_offset = 0;
+		pool_index++;
+		pool_offset = 0;
 		/*
 		 * smp_store_release() here pairs with smp_load_acquire() from
-		 * |next_slab_inited| in stack_depot_save() and
-		 * init_stack_slab().
+		 * |next_pool_inited| in stack_depot_save() and
+		 * init_stack_pool().
 		 */
-		if (depot_index + 1 < STACK_ALLOC_MAX_SLABS)
-			smp_store_release(&next_slab_inited, 0);
+		if (pool_index + 1 < STACK_ALLOC_MAX_POOLS)
+			smp_store_release(&next_pool_inited, 0);
 	}
-	init_stack_slab(prealloc);
-	if (stack_slabs[depot_index] == NULL)
+	init_stack_pool(prealloc);
+	if (stack_pools[pool_index] == NULL)
 		return NULL;
 
-	stack = stack_slabs[depot_index] + depot_offset;
+	stack = stack_pools[pool_index] + pool_offset;
 	stack->hash = hash;
 	stack->size = size;
-	stack->handle.slabindex = depot_index;
-	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
+	stack->handle.pool_index = pool_index;
+	stack->handle.offset = pool_offset >> STACK_ALLOC_ALIGN;
 	stack->handle.valid = 1;
 	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
-	depot_offset += required_size;
+	pool_offset += required_size;
 
 	return stack;
 }
@@ -336,10 +336,10 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
  * @nr_entries:		Size of the storage array
  * @extra_bits:		Flags to store in unused bits of depot_stack_handle_t
  * @alloc_flags:	Allocation gfp flags
- * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
+ * @can_alloc:		Allocate stack pools (increased chance of failure if false)
  *
  * Saves a stack trace from @entries array of size @nr_entries. If @can_alloc is
- * %true, is allowed to replenish the stack slab pool in case no space is left
+ * %true, is allowed to replenish the stack pool in case no space is left
  * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
  * any allocations and will fail if no space is left to store the stack trace.
  *
@@ -396,14 +396,14 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		goto exit;
 
 	/*
-	 * Check if the current or the next stack slab need to be initialized.
+	 * Check if the current or the next stack pool need to be initialized.
 	 * If so, allocate the memory - we won't be able to do that under the
 	 * lock.
 	 *
 	 * The smp_load_acquire() here pairs with smp_store_release() to
-	 * |next_slab_inited| in depot_alloc_stack() and init_stack_slab().
+	 * |next_pool_inited| in depot_alloc_stack() and init_stack_pool().
 	 */
-	if (unlikely(can_alloc && !smp_load_acquire(&next_slab_inited))) {
+	if (unlikely(can_alloc && !smp_load_acquire(&next_pool_inited))) {
 		/*
 		 * Zero out zone modifiers, as we don't have specific zone
 		 * requirements. Keep the flags related to allocation in atomic
@@ -417,7 +417,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		prealloc = page_address(page);
 	}
 
-	raw_spin_lock_irqsave(&depot_lock, flags);
+	raw_spin_lock_irqsave(&pool_lock, flags);
 
 	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
@@ -437,10 +437,10 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		 * We didn't need to store this stack trace, but let's keep
 		 * the preallocated memory for the future.
 		 */
-		WARN_ON(!init_stack_slab(&prealloc));
+		WARN_ON(!init_stack_pool(&prealloc));
 	}
 
-	raw_spin_unlock_irqrestore(&depot_lock, flags);
+	raw_spin_unlock_irqrestore(&pool_lock, flags);
 
 exit:
 	if (prealloc) {
 		/* Nobody used this memory, ok to free it. */
@@ -488,7 +488,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
-	void *slab;
+	void *pool;
 	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
 	struct stack_record *stack;
@@ -496,15 +496,15 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 	if (!handle)
 		return 0;
 
-	if (parts.slabindex > depot_index) {
-		WARN(1, "slab index %d out of bounds (%d) for stack id %08x\n",
-			parts.slabindex, depot_index, handle);
+	if (parts.pool_index > pool_index) {
+		WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
+			parts.pool_index, pool_index, handle);
 		return 0;
 	}
-	slab = stack_slabs[parts.slabindex];
-	if (!slab)
+	pool = stack_pools[parts.pool_index];
+	if (!pool)
 		return 0;
-	stack = slab + offset;
+	stack = pool + offset;
 	*entries = stack->entries;
 	return stack->size;
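A note on the |next_pool_inited| comments carried through the rename:
they describe a release/acquire handshake in which the writer fully
initializes the next pool before release-storing the flag, so a reader
that acquire-loads a non-zero flag may use the pool without taking
pool_lock. The sketch below models that contract with C11 atomics
standing in for smp_store_release()/smp_load_acquire(); the pool
bookkeeping is simplified for illustration and is not the depot's
actual logic.

	#include <stdatomic.h>
	#include <stdio.h>
	#include <string.h>

	#define POOL_SIZE (1 << 14)  /* stand-in for STACK_ALLOC_SIZE */
	#define MAX_POOLS 8          /* stand-in for STACK_ALLOC_MAX_POOLS */

	static void *stack_pools[MAX_POOLS];
	static int pool_index;
	static atomic_int next_pool_inited;

	static char backing[POOL_SIZE]; /* stand-in for alloc_pages() memory */

	/* Writer: stash a preallocated region as the next pool, then publish. */
	static void install_next_pool(void *prealloc)
	{
		memset(prealloc, 0, POOL_SIZE);         /* initialize the pool */
		stack_pools[pool_index + 1] = prealloc; /* link it in */
		/*
		 * Release-store: every write above is visible to any thread
		 * that acquire-loads next_pool_inited and observes 1. This
		 * mirrors smp_store_release(&next_pool_inited, 1) in
		 * init_stack_pool().
		 */
		atomic_store_explicit(&next_pool_inited, 1, memory_order_release);
	}

	/* Reader: mirrors the smp_load_acquire() checks in stack_depot_save(). */
	static int next_pool_ready(void)
	{
		return atomic_load_explicit(&next_pool_inited, memory_order_acquire);
	}

	int main(void)
	{
		install_next_pool(backing);
		if (next_pool_ready())
			printf("pool %d published at %p\n",
			       pool_index + 1, stack_pools[pool_index + 1]);
		return 0;
	}

Resetting the flag with a release store when advancing to a fresh pool,
as depot_alloc_stack() does above, re-arms the same handshake for the
next preallocation.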