From patchwork Fri Mar 17 10:43:05 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13178865
From: Vlastimil Babka
To: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Andrew Morton,
    linux-mm@kvack.org, rcu@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, patches@lists.linux.dev,
    linux-doc@vger.kernel.org, Vlastimil Babka, Lorenzo Stoakes
Subject: [PATCH v2 4/6] mm/slab: remove CONFIG_SLOB code from slab common code
Date: Fri, 17 Mar 2023 11:43:05 +0100
Message-Id: <20230317104307.29328-5-vbabka@suse.cz>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230317104307.29328-1-vbabka@suse.cz>
References: <20230317104307.29328-1-vbabka@suse.cz>
MIME-Version: 1.0

CONFIG_SLOB has been removed from Kconfig. Remove code and #ifdef's
specific to SLOB in the slab headers and common code.

Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Acked-by: Lorenzo Stoakes
Acked-by: Mike Rapoport (IBM)
---
 include/linux/slab.h | 39 ----------------------------
 mm/slab.h            | 61 --------------------------------------------
 mm/slab_common.c     |  2 --
 3 files changed, 102 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 45af70315a94..7f645a4c1298 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -298,19 +298,6 @@ static inline unsigned int arch_slab_minalign(void)
 #endif
 #endif
 
-#ifdef CONFIG_SLOB
-/*
- * SLOB passes all requests larger than one page to the page allocator.
- * No kmalloc array is necessary since objects of different sizes can
- * be allocated from the same page.
- */
-#define KMALLOC_SHIFT_HIGH	PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
-#ifndef KMALLOC_SHIFT_LOW
-#define KMALLOC_SHIFT_LOW	3
-#endif
-#endif
-
 /* Maximum allocatable size */
 #define KMALLOC_MAX_SIZE	(1UL << KMALLOC_SHIFT_MAX)
 /* Maximum size for which we actually use a slab cache */
@@ -366,7 +353,6 @@ enum kmalloc_cache_type {
 	NR_KMALLOC_TYPES
 };
 
-#ifndef CONFIG_SLOB
 extern struct kmem_cache *
 kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
 
@@ -458,7 +444,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 }
 static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
-#endif /* !CONFIG_SLOB */
 
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
 
@@ -487,10 +472,6 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
 int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
 
-/*
- * Caller must not use kfree_bulk() on memory not originally allocated
- * by kmalloc(), because the SLOB allocator cannot handle this.
- */
 static __always_inline void kfree_bulk(size_t size, void **p)
 {
 	kmem_cache_free_bulk(NULL, size, p);
@@ -567,7 +548,6 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
  * Try really hard to succeed the allocation but fail
  * eventually.
  */
-#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size) && size) {
@@ -583,17 +563,7 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	}
 	return __kmalloc(size, flags);
 }
-#else
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large(size, flags);
-
-	return __kmalloc(size, flags);
-}
-#endif
 
-#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	if (__builtin_constant_p(size) && size) {
@@ -609,15 +579,6 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
 	}
 	return __kmalloc_node(size, flags, node);
 }
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
-}
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.h b/mm/slab.h
index 43966aa5fadf..399966b3ce52 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -51,14 +51,6 @@ struct slab {
 	};
 	unsigned int __unused;
 
-#elif defined(CONFIG_SLOB)
-
-	struct list_head slab_list;
-	void *__unused_1;
-	void *freelist;		/* first free block */
-	long units;
-	unsigned int __unused_2;
-
 #else
 #error "Unexpected slab allocator configured"
 #endif
@@ -72,11 +64,7 @@ struct slab {
 #define SLAB_MATCH(pg, sl)	\
 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
 SLAB_MATCH(flags, __page_flags);
-#ifndef CONFIG_SLOB
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
-#else
-SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
-#endif
 SLAB_MATCH(_refcount, __page_refcount);
 #ifdef CONFIG_MEMCG
 SLAB_MATCH(memcg_data, memcg_data);
@@ -200,31 +188,6 @@ static inline size_t slab_size(const struct slab *slab)
 	return PAGE_SIZE << slab_order(slab);
 }
 
-#ifdef CONFIG_SLOB
-/*
- * Common fields provided in kmem_cache by all slab allocators
- * This struct is either used directly by the allocator (SLOB)
- * or the allocator must include definitions for all fields
- * provided in kmem_cache_common in their definition of kmem_cache.
- *
- * Once we can do anonymous structs (C11 standard) we could put a
- * anonymous struct definition in these allocators so that the
- * separate allocations in the kmem_cache structure of SLAB and
- * SLUB is no longer needed.
- */
-struct kmem_cache {
-	unsigned int object_size;/* The original size of the object */
-	unsigned int size;	/* The aligned/padded/added on size */
-	unsigned int align;	/* Alignment as calculated */
-	slab_flags_t flags;	/* Active flags on the slab */
-	const char *name;	/* Slab name for sysfs */
-	int refcount;		/* Use counter */
-	void (*ctor)(void *);	/* Called on object slot creation */
-	struct list_head list;	/* List of all slab caches on the system */
-};
-
-#endif /* CONFIG_SLOB */
-
 #ifdef CONFIG_SLAB
 #include <linux/slab_def.h>
 #endif
@@ -274,7 +237,6 @@ extern const struct kmalloc_info_struct {
 	unsigned int size;
 } kmalloc_info[];
 
-#ifndef CONFIG_SLOB
 /* Kmalloc array related functions */
 void setup_kmalloc_cache_index_table(void);
 void create_kmalloc_caches(slab_flags_t);
@@ -286,7 +248,6 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
 			      int node, size_t orig_size,
 			      unsigned long caller);
 void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
-#endif
 
 gfp_t kmalloc_fix_flags(gfp_t flags);
 
@@ -303,33 +264,16 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 int slab_unmergeable(struct kmem_cache *s);
 struct kmem_cache *find_mergeable(unsigned size, unsigned align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *));
-#ifndef CONFIG_SLOB
 struct kmem_cache *
 __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
 
 slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name);
-#else
-static inline struct kmem_cache *
-__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
-		   slab_flags_t flags, void (*ctor)(void *))
-{ return NULL; }
-
-static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
-	slab_flags_t flags, const char *name)
-{
-	return flags;
-}
-#endif
 
 static inline bool is_kmalloc_cache(struct kmem_cache *s)
 {
-#ifndef CONFIG_SLOB
 	return (s->flags & SLAB_KMALLOC);
-#else
-	return false;
-#endif
 }
 
 /* Legal flag mask for kmem_cache_create(), for various configurations */
@@ -634,7 +578,6 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
-#ifndef CONFIG_SLOB
 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
 	struct slab *slab;
@@ -684,8 +627,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 
 void free_large_kmalloc(struct folio *folio, void *object);
 
-#endif /* CONFIG_SLOB */
-
 size_t __ksize(const void *objp);
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
@@ -777,7 +718,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
 }
 
-#ifndef CONFIG_SLOB
 /*
  * The slab lists for all objects.
  */
@@ -824,7 +764,6 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
 	for (__node = 0; __node < nr_node_ids; __node++) \
 		 if ((__n = get_node(__s, __node)))
 
-#endif
 
 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
 void dump_unreclaimable_slab(void);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bf4e777cfe90..1522693295f5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -625,7 +625,6 @@ void kmem_dump_obj(void *object)
 EXPORT_SYMBOL_GPL(kmem_dump_obj);
 #endif
 
-#ifndef CONFIG_SLOB
 /* Create a cache during boot when no slab services are available yet */
 void __init create_boot_cache(struct kmem_cache *s, const char *name,
 		unsigned int size, slab_flags_t flags,
@@ -1079,7 +1078,6 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_node_trace);
-#endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
 {
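
[Illustrative aside, not part of the patch: with the SLOB-only variants gone,
the constant-size kmalloc() fast path above always rounds the request up to a
kmalloc cache bucket via __kmalloc_index() and only defers to kmalloc_large()
beyond KMALLOC_MAX_CACHE_SIZE. The stand-alone C sketch below mimics that
rounding in userspace under simplified assumptions: demo_size_index() and the
DEMO_* constants are hypothetical stand-ins, and the real bucket table also
has 96- and 192-byte caches that this sketch omits.]

/*
 * Illustrative userspace sketch only -- not kernel code. It mimics the
 * size -> kmalloc bucket rounding that the (now unconditional) constant-size
 * kmalloc() fast path performs via __kmalloc_index(). Real limits come from
 * KMALLOC_SHIFT_LOW/HIGH and KMALLOC_MAX_CACHE_SIZE in include/linux/slab.h.
 */
#include <stdio.h>
#include <stddef.h>

#define DEMO_SHIFT_LOW		3	/* smallest bucket: 1 << 3 = 8 bytes */
#define DEMO_SHIFT_HIGH		13	/* largest demo bucket: 8192 bytes */
#define DEMO_MAX_CACHE_SIZE	(1UL << DEMO_SHIFT_HIGH)

/* Return the bucket index for @size, or -1 if it goes to the page allocator. */
static int demo_size_index(size_t size)
{
	size_t bucket = 1UL << DEMO_SHIFT_LOW;
	int idx = DEMO_SHIFT_LOW;

	if (size == 0 || size > DEMO_MAX_CACHE_SIZE)
		return -1;

	/* Round the request up to the next power-of-two bucket. */
	while (bucket < size) {
		bucket <<= 1;
		idx++;
	}
	return idx;
}

int main(void)
{
	size_t sizes[] = { 1, 8, 24, 100, 4096, 9000 };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		int idx = demo_size_index(sizes[i]);

		if (idx < 0)
			printf("size %zu -> page allocator (kmalloc_large)\n",
			       sizes[i]);
		else
			printf("size %zu -> kmalloc-%lu (index %d)\n",
			       sizes[i], 1UL << idx, idx);
	}
	return 0;
}

Compiling and running the sketch prints which bucket each example size would
land in, e.g. "size 24 -> kmalloc-32 (index 5)".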