From patchwork Mon Nov 20 18:34:22 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka <vbabka@suse.cz>
X-Patchwork-Id: 13461884
From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 20 Nov 2023 19:34:22 +0100
Subject: [PATCH v2 11/21] mm/slab: move the rest of slub_def.h to mm/slab.h
MIME-Version: 1.0
Message-Id: <20231120-slab-remove-slab-v2-11-9c9c70177183@suse.cz>
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko,
 Shakeel Butt, Muchun Song, Kees Cook, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
 cgroups@vger.kernel.org, linux-hardening@vger.kernel.org,
 Vlastimil Babka
X-Mailer: b4 0.12.4

mm/slab.h is the only place to include include/linux/slub_def.h, which
has allowed switching between SLAB and SLUB. Now we can simply move the
contents over and remove slub_def.h.

Use this opportunity to fix up some whitespace (alignment) issues.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slub_def.h | 150 -----------------------------------------------
 mm/slab.h                | 138 ++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 137 insertions(+), 151 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
deleted file mode 100644
index a0229ea42977..000000000000
--- a/include/linux/slub_def.h
+++ /dev/null
@@ -1,150 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_SLUB_DEF_H
-#define _LINUX_SLUB_DEF_H
-
-/*
- * SLUB : A Slab allocator without object queues.
- *
- * (C) 2007 SGI, Christoph Lameter
- */
-#include <linux/kfence.h>
-#include <linux/kobject.h>
-#include <linux/reciprocal_div.h>
-#include <linux/local_lock.h>
-
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-#define slub_percpu_partial(c)			((c)->partial)
-
-#define slub_set_percpu_partial(c, p)		\
-({						\
-	slub_percpu_partial(c) = (p)->next;	\
-})
-
-#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
-#else
-#define slub_percpu_partial(c)			NULL
-
-#define slub_set_percpu_partial(c, p)
-
-#define slub_percpu_partial_read_once(c)	NULL
-#endif // CONFIG_SLUB_CPU_PARTIAL
-
-/*
- * Word size structure that can be atomically updated or read and that
- * contains both the order and the number of objects that a slab of the
- * given order would contain.
- */
-struct kmem_cache_order_objects {
-	unsigned int x;
-};
-
-/*
- * Slab cache management.
- */
-struct kmem_cache {
-#ifndef CONFIG_SLUB_TINY
-	struct kmem_cache_cpu __percpu *cpu_slab;
-#endif
-	/* Used for retrieving partial slabs, etc. */
-	slab_flags_t flags;
-	unsigned long min_partial;
-	unsigned int size;	/* The size of an object including metadata */
-	unsigned int object_size;/* The size of an object without metadata */
-	struct reciprocal_value reciprocal_size;
-	unsigned int offset;	/* Free pointer offset */
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	/* Number of per cpu partial objects to keep around */
-	unsigned int cpu_partial;
-	/* Number of per cpu partial slabs to keep around */
-	unsigned int cpu_partial_slabs;
-#endif
-	struct kmem_cache_order_objects oo;
-
-	/* Allocation and freeing of slabs */
-	struct kmem_cache_order_objects min;
-	gfp_t allocflags;	/* gfp flags to use on each alloc */
-	int refcount;		/* Refcount for slab cache destroy */
-	void (*ctor)(void *);
-	unsigned int inuse;		/* Offset to metadata */
-	unsigned int align;		/* Alignment */
-	unsigned int red_left_pad;	/* Left redzone padding size */
-	const char *name;	/* Name (only for display!) */
-	struct list_head list;	/* List of slab caches */
-#ifdef CONFIG_SYSFS
-	struct kobject kobj;	/* For sysfs */
-#endif
-#ifdef CONFIG_SLAB_FREELIST_HARDENED
-	unsigned long random;
-#endif
-
-#ifdef CONFIG_NUMA
-	/*
-	 * Defragmentation by allocating from a remote node.
-	 */
-	unsigned int remote_node_defrag_ratio;
-#endif
-
-#ifdef CONFIG_SLAB_FREELIST_RANDOM
-	unsigned int *random_seq;
-#endif
-
-#ifdef CONFIG_KASAN_GENERIC
-	struct kasan_cache kasan_info;
-#endif
-
-#ifdef CONFIG_HARDENED_USERCOPY
-	unsigned int useroffset;	/* Usercopy region offset */
-	unsigned int usersize;		/* Usercopy region size */
-#endif
-
-	struct kmem_cache_node *node[MAX_NUMNODES];
-};
-
-#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
-#define SLAB_SUPPORTS_SYSFS
-void sysfs_slab_unlink(struct kmem_cache *);
-void sysfs_slab_release(struct kmem_cache *);
-#else
-static inline void sysfs_slab_unlink(struct kmem_cache *s)
-{
-}
-static inline void sysfs_slab_release(struct kmem_cache *s)
-{
-}
-#endif
-
-void *fixup_red_left(struct kmem_cache *s, void *p);
-
-static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
-				void *x) {
-	void *object = x - (x - slab_address(slab)) % cache->size;
-	void *last_object = slab_address(slab) +
-		(slab->objects - 1) * cache->size;
-	void *result = (unlikely(object > last_object)) ? last_object : object;
-
-	result = fixup_red_left(cache, result);
-	return result;
-}
-
-/* Determine object index from a given position */
-static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
-					  void *addr, void *obj)
-{
-	return reciprocal_divide(kasan_reset_tag(obj) - addr,
-				 cache->reciprocal_size);
-}
-
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
-{
-	if (is_kfence_address(obj))
-		return 0;
-	return __obj_to_index(cache, slab_address(slab), obj);
-}
-
-static inline int objs_per_slab(const struct kmem_cache *cache,
-				const struct slab *slab)
-{
-	return slab->objects;
-}
-#endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slab.h b/mm/slab.h
index 014c36ea51fa..3a8d13c099fa 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -209,7 +209,143 @@ static inline size_t slab_size(const struct slab *slab)
 	return PAGE_SIZE << slab_order(slab);
 }
 
-#include <linux/slub_def.h>
+#include <linux/kfence.h>
+#include <linux/kobject.h>
+#include <linux/reciprocal_div.h>
+#include <linux/local_lock.h>
+
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+#define slub_percpu_partial(c)			((c)->partial)
+
+#define slub_set_percpu_partial(c, p)		\
+({						\
+	slub_percpu_partial(c) = (p)->next;	\
+})
+
+#define slub_percpu_partial_read_once(c)	READ_ONCE(slub_percpu_partial(c))
+#else
+#define slub_percpu_partial(c)			NULL
+
+#define slub_set_percpu_partial(c, p)
+
+#define slub_percpu_partial_read_once(c)	NULL
+#endif // CONFIG_SLUB_CPU_PARTIAL
+
+/*
+ * Word size structure that can be atomically updated or read and that
+ * contains both the order and the number of objects that a slab of the
+ * given order would contain.
+ */
+struct kmem_cache_order_objects {
+	unsigned int x;
+};
+
+/*
+ * Slab cache management.
+ */
+struct kmem_cache {
+#ifndef CONFIG_SLUB_TINY
+	struct kmem_cache_cpu __percpu *cpu_slab;
+#endif
+	/* Used for retrieving partial slabs, etc. */
+	slab_flags_t flags;
+	unsigned long min_partial;
+	unsigned int size;		/* Object size including metadata */
+	unsigned int object_size;	/* Object size without metadata */
+	struct reciprocal_value reciprocal_size;
+	unsigned int offset;		/* Free pointer offset */
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
+	/* Number of per cpu partial slabs to keep around */
+	unsigned int cpu_partial_slabs;
+#endif
+	struct kmem_cache_order_objects oo;
+
+	/* Allocation and freeing of slabs */
+	struct kmem_cache_order_objects min;
+	gfp_t allocflags;		/* gfp flags to use on each alloc */
+	int refcount;			/* Refcount for slab cache destroy */
+	void (*ctor)(void *object);	/* Object constructor */
+	unsigned int inuse;		/* Offset to metadata */
+	unsigned int align;		/* Alignment */
+	unsigned int red_left_pad;	/* Left redzone padding size */
+	const char *name;		/* Name (only for display!) */
+	struct list_head list;		/* List of slab caches */
+#ifdef CONFIG_SYSFS
+	struct kobject kobj;		/* For sysfs */
+#endif
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	unsigned long random;
+#endif
+
+#ifdef CONFIG_NUMA
+	/*
+	 * Defragmentation by allocating from a remote node.
+	 */
+	unsigned int remote_node_defrag_ratio;
+#endif
+
+#ifdef CONFIG_SLAB_FREELIST_RANDOM
+	unsigned int *random_seq;
+#endif
+
+#ifdef CONFIG_KASAN_GENERIC
+	struct kasan_cache kasan_info;
+#endif
+
+#ifdef CONFIG_HARDENED_USERCOPY
+	unsigned int useroffset;	/* Usercopy region offset */
+	unsigned int usersize;		/* Usercopy region size */
+#endif
+
+	struct kmem_cache_node *node[MAX_NUMNODES];
+};
+
+#if defined(CONFIG_SYSFS) && !defined(CONFIG_SLUB_TINY)
+#define SLAB_SUPPORTS_SYSFS
+void sysfs_slab_unlink(struct kmem_cache *s);
+void sysfs_slab_release(struct kmem_cache *s);
+#else
+static inline void sysfs_slab_unlink(struct kmem_cache *s) { }
+static inline void sysfs_slab_release(struct kmem_cache *s) { }
+#endif
+
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
+static inline void *nearest_obj(struct kmem_cache *cache,
+				const struct slab *slab, void *x)
+{
+	void *object = x - (x - slab_address(slab)) % cache->size;
+	void *last_object = slab_address(slab) +
+			    (slab->objects - 1) * cache->size;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
+}
+
+/* Determine object index from a given position */
+static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
+					  void *addr, void *obj)
+{
+	return reciprocal_divide(kasan_reset_tag(obj) - addr,
+				 cache->reciprocal_size);
+}
+
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct slab *slab, void *obj)
+{
+	if (is_kfence_address(obj))
+		return 0;
+	return __obj_to_index(cache, slab_address(slab), obj);
+}
+
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				const struct slab *slab)
+{
+	return slab->objects;
+}
 
 #include <linux/memcontrol.h>
 #include <linux/fault-inject.h>
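
A side note for readers following the move: struct kmem_cache_order_objects
is deliberately a single word so that the "oo" member can be read or updated
with one plain load or store. The pack/unpack helpers live in mm/slub.c, not
in the header moved here; the sketch below shows the scheme they use. It is
simplified to take the object count directly (the real oo_make() takes the
object size and computes the count itself), and OO_SHIFT mirrors the upstream
value at the time of this series.

/*
 * Illustrative sketch, not part of the patch: packing the slab page
 * order and the per-slab object count into one word.
 */
#define OO_SHIFT	16
#define OO_MASK		((1 << OO_SHIFT) - 1)

struct kmem_cache_order_objects {
	unsigned int x;
};

static inline struct kmem_cache_order_objects oo_make(unsigned int order,
						      unsigned int nr_objects)
{
	/* A single store of .x publishes both fields at once */
	struct kmem_cache_order_objects x = {
		(order << OO_SHIFT) + nr_objects
	};
	return x;
}

static inline unsigned int oo_order(struct kmem_cache_order_objects x)
{
	return x.x >> OO_SHIFT;		/* high bits: page order */
}

static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
{
	return x.x & OO_MASK;		/* low bits: objects per slab */
}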
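
Similarly, the reciprocal_size member cached in struct kmem_cache exists so
that __obj_to_index() can turn an object's byte offset into an object index
without a runtime integer division: reciprocal_value() is computed once at
cache creation, and reciprocal_divide() then needs only a multiply and two
shifts. Below is a standalone userspace rendering of that Granlund-Montgomery
scheme from lib/reciprocal_div.c; the 704-byte object size is a made-up
example, not taken from this patch.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct reciprocal_value {
	uint32_t m;
	uint8_t sh1, sh2;
};

static int fls32(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Precomputed once, at cache creation time in the kernel. */
static struct reciprocal_value reciprocal_value(uint32_t d)
{
	struct reciprocal_value R;
	int l = fls32(d - 1);
	uint64_t m = ((1ULL << 32) * ((1ULL << l) - d)) / d + 1;

	R.m = (uint32_t)m;
	R.sh1 = l > 1 ? 1 : l;		/* min(l, 1) */
	R.sh2 = l > 1 ? l - 1 : 0;	/* max(l - 1, 0) */
	return R;
}

/* The per-call division becomes one multiply and two shifts. */
static uint32_t reciprocal_divide(uint32_t a, struct reciprocal_value R)
{
	uint32_t t = (uint32_t)(((uint64_t)a * R.m) >> 32);
	return (t + ((a - t) >> R.sh1)) >> R.sh2;
}

int main(void)
{
	uint32_t size = 704;	/* hypothetical object size */
	struct reciprocal_value r = reciprocal_value(size);
	uint32_t offset;

	for (offset = 0; offset < 100000; offset++)
		assert(reciprocal_divide(offset, r) == offset / size);
	printf("reciprocal_divide matches '/' for size %u\n", size);
	return 0;
}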