From patchwork Mon Nov 20 09:12:14 2023
X-Patchwork-Submitter: Xiongwei Song <sxwjean@me.com>
X-Patchwork-Id: 13460975
From: sxwjean@me.com
To: cl@linux.com, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, vbabka@suse.cz, roman.gushchin@linux.dev,
	42.hyeyoo@gmail.com
Cc: corbet@lwn.net, linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] mm/slab: move slab merge from slab_common.c to slub.c
Date: Mon, 20 Nov 2023 17:12:14 +0800
Message-Id: <20231120091214.150502-5-sxwjean@me.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231120091214.150502-1-sxwjean@me.com>
References: <20231120091214.150502-1-sxwjean@me.com>
MIME-Version: 1.0
From: Xiongwei Song <sxwjean@me.com>

Since the SLAB allocator has been removed, SLUB is the only remaining user
of the slab merge code. This commit is essentially a revert of commit
423c929cbbec ("mm/slab_common: commonize slab merge logic"). It also changes
the prefix of all slab-merge-related functions, variables and definitions
from "slab/SLAB" to "slub/SLUB".
Signed-off-by: Xiongwei Song <sxwjean@me.com>
---
 mm/slab.h        |   3 --
 mm/slab_common.c |  98 ----------------------------------------------
 mm/slub.c        | 100 ++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 99 insertions(+), 102 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 8d20f8c6269d..cd52e705ce28 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -429,9 +429,6 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 unsigned int calculate_alignment(slab_flags_t flags,
 		unsigned int align, unsigned int size);

-int slab_unmergeable(struct kmem_cache *s);
-struct kmem_cache *find_mergeable(unsigned size, unsigned align,
-		slab_flags_t flags, const char *name, void (*ctor)(void *));
 struct kmem_cache *
 __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 62eb77fdedf2..6960ae5c35ee 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -45,36 +45,6 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
 static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		    slab_caches_to_rcu_destroy_workfn);

-/*
- * Set of flags that will prevent slab merging
- */
-#define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
-		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
-		SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
-
-#define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
-			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
-
-/*
- * Merge control. If this is set then no merging of slab caches will occur.
- */
-static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
-
-static int __init setup_slab_nomerge(char *str)
-{
-	slub_nomerge = true;
-	return 1;
-}
-
-static int __init setup_slab_merge(char *str)
-{
-	slub_nomerge = false;
-	return 1;
-}
-
-__setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
-__setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
-
 /*
  * Determine the size of a slab object
  */
@@ -130,74 +100,6 @@ unsigned int calculate_alignment(slab_flags_t flags,
 	return ALIGN(align, sizeof(void *));
 }

-/*
- * Find a mergeable slab cache
- */
-int slab_unmergeable(struct kmem_cache *s)
-{
-	if (slub_nomerge || (s->flags & SLAB_NEVER_MERGE))
-		return 1;
-
-	if (s->ctor)
-		return 1;
-
-#ifdef CONFIG_HARDENED_USERCOPY
-	if (s->usersize)
-		return 1;
-#endif
-
-	/*
-	 * We may have set a slab to be unmergeable during bootstrap.
-	 */
-	if (s->refcount < 0)
-		return 1;
-
-	return 0;
-}
-
-struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
-		slab_flags_t flags, const char *name, void (*ctor)(void *))
-{
-	struct kmem_cache *s;
-
-	if (slub_nomerge)
-		return NULL;
-
-	if (ctor)
-		return NULL;
-
-	size = ALIGN(size, sizeof(void *));
-	align = calculate_alignment(flags, align, size);
-	size = ALIGN(size, align);
-	flags = kmem_cache_flags(size, flags, name);
-
-	if (flags & SLAB_NEVER_MERGE)
-		return NULL;
-
-	list_for_each_entry_reverse(s, &slab_caches, list) {
-		if (slab_unmergeable(s))
-			continue;
-
-		if (size > s->size)
-			continue;
-
-		if ((flags & SLAB_MERGE_SAME) != (s->flags & SLAB_MERGE_SAME))
-			continue;
-		/*
-		 * Check if alignment is compatible.
-		 * Courtesy of Adrian Drzewiecki
-		 */
-		if ((s->size & ~(align - 1)) != s->size)
-			continue;
-
-		if (s->size - size >= sizeof(void *))
-			continue;
-
-		return s;
-	}
-	return NULL;
-}
-
 static struct kmem_cache *create_cache(const char *name,
 		unsigned int object_size, unsigned int align,
 		slab_flags_t flags, unsigned int useroffset,
diff --git a/mm/slub.c b/mm/slub.c
index ae1e6e635253..435d9ed140e4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -709,6 +709,104 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
 	return false;
 }

+/*
+ * Set of flags that will prevent slab merging
+ */
+#define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
+		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
+		SLAB_FAILSLAB | SLAB_NO_MERGE | kasan_never_merge())
+
+#define SLUB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
+			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
+
+/*
+ * Merge control. If this is set then no merging of slab caches will occur.
+ */
+static bool slub_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);
+
+static int __init setup_slub_nomerge(char *str)
+{
+	slub_nomerge = true;
+	return 1;
+}
+
+static int __init setup_slub_merge(char *str)
+{
+	slub_nomerge = false;
+	return 1;
+}
+
+__setup_param("slub_nomerge", slub_nomerge, setup_slub_nomerge, 0);
+__setup_param("slub_merge", slub_merge, setup_slub_merge, 0);
+
+/*
+ * Find a mergeable slab cache
+ */
+static inline int slub_unmergeable(struct kmem_cache *s)
+{
+	if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
+		return 1;
+
+	if (s->ctor)
+		return 1;
+
+#ifdef CONFIG_HARDENED_USERCOPY
+	if (s->usersize)
+		return 1;
+#endif
+
+	/*
+	 * We may have set a slab to be unmergeable during bootstrap.
+	 */
+	if (s->refcount < 0)
+		return 1;
+
+	return 0;
+}
+
+static struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
+		slab_flags_t flags, const char *name, void (*ctor)(void *))
+{
+	struct kmem_cache *s;
+
+	if (slub_nomerge)
+		return NULL;
+
+	if (ctor)
+		return NULL;
+
+	size = ALIGN(size, sizeof(void *));
+	align = calculate_alignment(flags, align, size);
+	size = ALIGN(size, align);
+	flags = kmem_cache_flags(size, flags, name);
+
+	if (flags & SLUB_NEVER_MERGE)
+		return NULL;
+
+	list_for_each_entry_reverse(s, &slab_caches, list) {
+		if (slub_unmergeable(s))
+			continue;
+
+		if (size > s->size)
+			continue;
+
+		if ((flags & SLUB_MERGE_SAME) != (s->flags & SLUB_MERGE_SAME))
+			continue;
+		/*
+		 * Check if alignment is compatible.
+		 * Courtesy of Adrian Drzewiecki
+		 */
+		if ((s->size & ~(align - 1)) != s->size)
+			continue;
+
+		if (s->size - size >= sizeof(void *))
+			continue;
+
+		return s;
+	}
+	return NULL;
+}
+
 #ifdef CONFIG_SLUB_DEBUG
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
@@ -6679,7 +6777,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
 	int err;
 	const char *name;
 	struct kset *kset = cache_kset(s);
-	int unmergeable = slab_unmergeable(s);
+	int unmergeable = slub_unmergeable(s);

 	if (!unmergeable && disable_higher_order_debug &&
 			(slub_debug & DEBUG_METADATA_FLAGS))