From patchwork Mon Nov 20 18:34:21 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13461882
From: Vlastimil Babka
Date: Mon, 20 Nov 2023 19:34:21 +0100
Subject: [PATCH v2 10/21] mm/slab: move struct kmem_cache_cpu declaration to slub.c
Message-Id: <20231120-slab-remove-slab-v2-10-9c9c70177183@suse.cz>
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko,
 Shakeel Butt, Muchun Song, Kees Cook, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
 cgroups@vger.kernel.org, linux-hardening@vger.kernel.org,
 Vlastimil Babka
X-Mailer: b4 0.12.4

Nothing outside SLUB itself accesses the struct kmem_cache_cpu fields,
so it does not need to be declared in slub_def.h. This also allows
moving enum stat_item.

Reviewed-by: Kees Cook
Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slub_def.h | 54 ------------------------------------------------
 mm/slub.c                | 54 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index deb90cf4bffb..a0229ea42977 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -12,60 +12,6 @@
 #include
 #include

-enum stat_item {
-	ALLOC_FASTPATH,		/* Allocation from cpu slab */
-	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
-	FREE_FASTPATH,		/* Free to cpu slab */
-	FREE_SLOWPATH,		/* Freeing not to cpu slab */
-	FREE_FROZEN,		/* Freeing to frozen slab */
-	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
-	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
-	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
-	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
-	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
-	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
-	FREE_SLAB,		/* Slab freed to the page allocator */
-	CPUSLAB_FLUSH,		/* Abandoning of the cpu slab */
-	DEACTIVATE_FULL,	/* Cpu slab was full when deactivated */
-	DEACTIVATE_EMPTY,	/* Cpu slab was empty when deactivated */
-	DEACTIVATE_TO_HEAD,	/* Cpu slab was moved to the head of partials */
-	DEACTIVATE_TO_TAIL,	/* Cpu slab was moved to the tail of partials */
-	DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
-	DEACTIVATE_BYPASS,	/* Implicit deactivation */
-	ORDER_FALLBACK,		/* Number of times fallback was necessary */
-	CMPXCHG_DOUBLE_CPU_FAIL,/* Failure of this_cpu_cmpxchg_double */
-	CMPXCHG_DOUBLE_FAIL,	/* Number of times that cmpxchg double did not match */
-	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
-	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
-	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
-	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
-	NR_SLUB_STAT_ITEMS
-};
-
-#ifndef CONFIG_SLUB_TINY
-/*
- * When changing the layout, make sure freelist and tid are still compatible
- * with this_cpu_cmpxchg_double() alignment requirements.
- */
-struct kmem_cache_cpu {
-	union {
-		struct {
-			void **freelist;	/* Pointer to next available object */
-			unsigned long tid;	/* Globally unique transaction id */
-		};
-		freelist_aba_t freelist_tid;
-	};
-	struct slab *slab;	/* The slab from which we are allocating */
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-	struct slab *partial;	/* Partially allocated frozen slabs */
-#endif
-	local_lock_t lock;	/* Protects the fields above */
-#ifdef CONFIG_SLUB_STATS
-	unsigned stat[NR_SLUB_STAT_ITEMS];
-#endif
-};
-#endif /* CONFIG_SLUB_TINY */
-
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 #define slub_percpu_partial(c)		((c)->partial)

diff --git a/mm/slub.c b/mm/slub.c
index 3e01731783df..979932d046fd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -330,6 +330,60 @@ static void debugfs_slab_add(struct kmem_cache *);
 static inline void debugfs_slab_add(struct kmem_cache *s) { }
 #endif

+enum stat_item {
+	ALLOC_FASTPATH,		/* Allocation from cpu slab */
+	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
+	FREE_FASTPATH,		/* Free to cpu slab */
+	FREE_SLOWPATH,		/* Freeing not to cpu slab */
+	FREE_FROZEN,		/* Freeing to frozen slab */
+	FREE_ADD_PARTIAL,	/* Freeing moves slab to partial list */
+	FREE_REMOVE_PARTIAL,	/* Freeing removes last object */
+	ALLOC_FROM_PARTIAL,	/* Cpu slab acquired from node partial list */
+	ALLOC_SLAB,		/* Cpu slab acquired from page allocator */
+	ALLOC_REFILL,		/* Refill cpu slab from slab freelist */
+	ALLOC_NODE_MISMATCH,	/* Switching cpu slab */
+	FREE_SLAB,		/* Slab freed to the page allocator */
+	CPUSLAB_FLUSH,		/* Abandoning of the cpu slab */
+	DEACTIVATE_FULL,	/* Cpu slab was full when deactivated */
+	DEACTIVATE_EMPTY,	/* Cpu slab was empty when deactivated */
+	DEACTIVATE_TO_HEAD,	/* Cpu slab was moved to the head of partials */
+	DEACTIVATE_TO_TAIL,	/* Cpu slab was moved to the tail of partials */
+	DEACTIVATE_REMOTE_FREES,/* Slab contained remotely freed objects */
+	DEACTIVATE_BYPASS,	/* Implicit deactivation */
+	ORDER_FALLBACK,		/* Number of times fallback was necessary */
+	CMPXCHG_DOUBLE_CPU_FAIL,/* Failures of this_cpu_cmpxchg_double */
+	CMPXCHG_DOUBLE_FAIL,	/* Failures of slab freelist update */
+	CPU_PARTIAL_ALLOC,	/* Used cpu partial on alloc */
+	CPU_PARTIAL_FREE,	/* Refill cpu partial on free */
+	CPU_PARTIAL_NODE,	/* Refill cpu partial from node partial */
+	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
+	NR_SLUB_STAT_ITEMS
+};
+
+#ifndef CONFIG_SLUB_TINY
+/*
+ * When changing the layout, make sure freelist and tid are still compatible
+ * with this_cpu_cmpxchg_double() alignment requirements.
+ */
+struct kmem_cache_cpu {
+	union {
+		struct {
+			void **freelist;	/* Pointer to next available object */
+			unsigned long tid;	/* Globally unique transaction id */
+		};
+		freelist_aba_t freelist_tid;
+	};
+	struct slab *slab;	/* The slab from which we are allocating */
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+	struct slab *partial;	/* Partially allocated frozen slabs */
+#endif
+	local_lock_t lock;	/* Protects the fields above */
+#ifdef CONFIG_SLUB_STATS
+	unsigned int stat[NR_SLUB_STAT_ITEMS];
+#endif
+};
+#endif /* CONFIG_SLUB_TINY */
+
 static inline void stat(const struct kmem_cache *s, enum stat_item si)
 {
 #ifdef CONFIG_SLUB_STATS
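
For readers outside mm/: the declarations being moved are a per-CPU bookkeeping
structure plus an enum-indexed statistics array that only slub.c ever touches,
which is why they can live in the .c file. Below is a minimal user-space sketch
of that pattern, not kernel code: the names cache_stats and stat_inc are made up
for illustration, and _Thread_local stands in for the kernel's per-CPU storage.

/*
 * Stand-alone sketch of an enum-indexed, per-"CPU" (per-thread here)
 * counter array, analogous to stat[NR_SLUB_STAT_ITEMS] in
 * struct kmem_cache_cpu. Build with: cc -std=c11 -o stats stats.c
 */
#include <stdio.h>

enum stat_item {		/* a small subset of the items moved by this patch */
	ALLOC_FASTPATH,
	ALLOC_SLOWPATH,
	FREE_FASTPATH,
	FREE_SLOWPATH,
	NR_STAT_ITEMS
};

/* Per-thread counters; the kernel uses a per-CPU allocation instead. */
static _Thread_local unsigned int cache_stats[NR_STAT_ITEMS];

/* Analogue of SLUB's stat(): bump one counter on the local "CPU". */
static inline void stat_inc(enum stat_item si)
{
	cache_stats[si]++;
}

int main(void)
{
	stat_inc(ALLOC_FASTPATH);
	stat_inc(ALLOC_FASTPATH);
	stat_inc(FREE_SLOWPATH);

	printf("alloc fastpath: %u, free slowpath: %u\n",
	       cache_stats[ALLOC_FASTPATH], cache_stats[FREE_SLOWPATH]);
	return 0;
}

Keeping both the enum and the structure private to the one translation unit
that uses them is the same encapsulation the patch achieves by moving them
out of slub_def.h.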