From patchwork Fri Jan 24 16:48:58 2025
X-Patchwork-Submitter: Kevin Brodsky
X-Patchwork-Id: 13949752
From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v2] mm/slab: simplify SLAB_* flag handling
Date: Fri, 24 Jan 2025 16:48:58 +0000
Message-ID: <20250124164858.756425-1-kevin.brodsky@arm.com>
X-Mailer: git-send-email 2.47.0

SLUB is the only remaining allocator. We can therefore get rid of the
logic for allocator-specific flags:

* Merge SLAB_CACHE_FLAGS into SLAB_CORE_FLAGS.

* Remove CACHE_CREATE_MASK and instead mask out SLAB_DEBUG_FLAGS if
  !CONFIG_SLUB_DEBUG. SLAB_DEBUG_FLAGS is now defined unconditionally
  (no impact on existing code, which ignores it if !CONFIG_SLUB_DEBUG).

* Define SLAB_FLAGS_PERMITTED in terms of SLAB_CORE_FLAGS and
  SLAB_DEBUG_FLAGS (no functional change).

While at it, also remove misleading comments that suggest that multiple
allocators are available.

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
v1..v2:

* Keep the SLAB_FLAGS_PERMITTED check and remove CACHE_CREATE_MASK
  instead. SLAB_DEBUG_FLAGS is now defined unconditionally and
  explicitly masked out.
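The snippet below is not part of the patch; it is a minimal, userspace-only
sketch of the resulting flag handling. The flag bit values and the
SLUB_DEBUG_ENABLED switch are made up for illustration; only the masking and
permitted-flags check mirror what the patch does in
__kmem_cache_create_args().

#include <stdio.h>

/* Hypothetical bit values, for illustration only */
#define SLAB_HWCACHE_ALIGN	0x0001U
#define SLAB_RECLAIM_ACCOUNT	0x0002U
#define SLAB_ACCOUNT		0x0004U
#define SLAB_RED_ZONE		0x0100U
#define SLAB_POISON		0x0200U
#define SLAB_STORE_USER		0x0400U

/* Core flags now also cover what used to live in SLAB_CACHE_FLAGS */
#define SLAB_CORE_FLAGS		(SLAB_HWCACHE_ALIGN | SLAB_RECLAIM_ACCOUNT | SLAB_ACCOUNT)
/* Debug flags are defined unconditionally */
#define SLAB_DEBUG_FLAGS	(SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
/* Permitted flags are simply the union of the two */
#define SLAB_FLAGS_PERMITTED	(SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS)

#define SLUB_DEBUG_ENABLED	0	/* stand-in for CONFIG_SLUB_DEBUG */

static int check_create_flags(unsigned int flags)
{
#if !SLUB_DEBUG_ENABLED
	/* Debug support compiled out: silently drop the debug flags */
	flags &= ~SLAB_DEBUG_FLAGS;
#endif
	/* Anything left outside the permitted set is rejected (-EINVAL in the kernel) */
	if (flags & ~SLAB_FLAGS_PERMITTED)
		return -1;
	return 0;
}

int main(void)
{
	unsigned int flags = SLAB_HWCACHE_ALIGN | SLAB_POISON;

	/* With debug support "compiled out", SLAB_POISON is dropped rather than rejected */
	printf("flags 0x%x -> %s\n", flags,
	       check_create_flags(flags) ? "rejected" : "accepted");
	return 0;
}

The point of the v2 approach is that SLAB_DEBUG_FLAGS keeps the same
definition in both configurations, so the permitted mask never changes
shape; only the incoming flags are sanitized.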
v1: https://lore.kernel.org/all/20250117113226.3484784-1-kevin.brodsky@arm.com/

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.h        | 32 +++++---------------------------
 mm/slab_common.c | 11 ++---------
 2 files changed, 7 insertions(+), 36 deletions(-)

base-commit: ab18b8fff124c9b76ea12692571ca822dcd92854

diff --git a/mm/slab.h b/mm/slab.h
index e9fd9bf0bfa6..1a081f50f947 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -457,39 +457,17 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
 	return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
 }
 
-/* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
-			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS | \
+			 SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
+			 SLAB_TEMPORARY | SLAB_ACCOUNT | \
+			 SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NO_MERGE)
 
-#ifdef CONFIG_SLUB_DEBUG
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
 			  SLAB_TRACE | SLAB_CONSISTENCY_CHECKS)
-#else
-#define SLAB_DEBUG_FLAGS (0)
-#endif
 
-#define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
-			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
-			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NO_MERGE)
-
-/* Common flags available with current configuration */
-#define CACHE_CREATE_MASK (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS | SLAB_CACHE_FLAGS)
-
-/* Common flags permitted for kmem_cache_create */
-#define SLAB_FLAGS_PERMITTED (SLAB_CORE_FLAGS | \
-			      SLAB_RED_ZONE | \
-			      SLAB_POISON | \
-			      SLAB_STORE_USER | \
-			      SLAB_TRACE | \
-			      SLAB_CONSISTENCY_CHECKS | \
-			      SLAB_NOLEAKTRACE | \
-			      SLAB_RECLAIM_ACCOUNT | \
-			      SLAB_TEMPORARY | \
-			      SLAB_ACCOUNT | \
-			      SLAB_KMALLOC | \
-			      SLAB_NO_MERGE | \
-			      SLAB_NO_USER_FLAGS)
+#define SLAB_FLAGS_PERMITTED (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS)
 
 bool __kmem_cache_empty(struct kmem_cache *);
 int __kmem_cache_shutdown(struct kmem_cache *);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 69f2d19010de..50b9c6497171 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -298,6 +298,8 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
 			static_branch_enable(&slub_debug_enabled);
 		if (flags & SLAB_STORE_USER)
 			stack_depot_init();
+#else
+	flags &= ~SLAB_DEBUG_FLAGS;
 #endif
 
 	mutex_lock(&slab_mutex);
@@ -307,20 +309,11 @@ struct kmem_cache *__kmem_cache_create_args(const char *name,
 		goto out_unlock;
 	}
 
-	/* Refuse requests with allocator specific flags */
 	if (flags & ~SLAB_FLAGS_PERMITTED) {
 		err = -EINVAL;
 		goto out_unlock;
 	}
 
-	/*
-	 * Some allocators will constraint the set of valid flags to a subset
-	 * of all flags. We expect them to define CACHE_CREATE_MASK in this
-	 * case, and we'll just provide them with a sanitized version of the
-	 * passed flags.
-	 */
-	flags &= CACHE_CREATE_MASK;
-
 	/* Fail closed on bad usersize of useroffset values. */
 	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
 	    WARN_ON(!args->usersize && args->useroffset) ||