From patchwork Mon Oct 31 13:47:45 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13025833
From: Liu Shixin <liushixin2@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
CC: Liu Shixin
Subject: [PATCH v2 1/3] mm/slab_common: Move cache_name to create_cache()
Date: Mon, 31 Oct 2022 21:47:45 +0800
Message-ID: <20221031134747.3049593-2-liushixin2@huawei.com>
In-Reply-To: <20221031134747.3049593-1-liushixin2@huawei.com>
References: <20221031134747.3049593-1-liushixin2@huawei.com>

The string cache_name and its kmem_cache share the same life cycle. The
latter is allocated in create_cache(), so move the allocation of
cache_name into create_cache() as well for better error handling.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/slab_common.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 33b1886b06eb..e5f430a17d95 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -209,17 +209,21 @@ static struct kmem_cache *create_cache(const char *name,
 			struct kmem_cache *root_cache)
 {
 	struct kmem_cache *s;
-	int err;
+	const char *cache_name;
+	int err = -ENOMEM;
 
 	if (WARN_ON(useroffset + usersize > object_size))
 		useroffset = usersize = 0;
 
-	err = -ENOMEM;
 	s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
 	if (!s)
-		goto out;
+		return ERR_PTR(err);
 
-	s->name = name;
+	cache_name = kstrdup_const(name, GFP_KERNEL);
+	if (!cache_name)
+		goto out_free_cache;
+
+	s->name = cache_name;
 	s->size = s->object_size = object_size;
 	s->align = align;
 	s->ctor = ctor;
@@ -228,18 +232,17 @@ static struct kmem_cache *create_cache(const char *name,
 
 	err = __kmem_cache_create(s, flags);
 	if (err)
-		goto out_free_cache;
+		goto out_free_name;
 
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
-out:
-	if (err)
-		return ERR_PTR(err);
 	return s;
 
+out_free_name:
+	kfree_const(s->name);
 out_free_cache:
 	kmem_cache_free(kmem_cache, s);
-	goto out;
+	return ERR_PTR(err);
 }
 
 /**
@@ -278,7 +281,6 @@ kmem_cache_create_usercopy(const char *name,
 		  void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
-	const char *cache_name;
 	int err;
 
 #ifdef CONFIG_SLUB_DEBUG
@@ -326,19 +328,11 @@ kmem_cache_create_usercopy(const char *name,
 	if (s)
 		goto out_unlock;
 
-	cache_name = kstrdup_const(name, GFP_KERNEL);
-	if (!cache_name) {
-		err = -ENOMEM;
-		goto out_unlock;
-	}
-
-	s = create_cache(cache_name, size,
+	s = create_cache(name, size,
 			 calculate_alignment(flags, align, size),
 			 flags, useroffset, usersize, ctor, NULL);
-	if (IS_ERR(s)) {
+	if (IS_ERR(s))
 		err = PTR_ERR(s);
-		kfree_const(cache_name);
-	}
 
 out_unlock:
 	mutex_unlock(&slab_mutex);
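
A usage sketch, for illustration only and not part of the patch (the names
filter_cache and struct filter are made up): the behaviour seen by callers
is unchanged. kmem_cache_create() still returns NULL on failure, and with
this change both the duplicated name and the partially constructed
kmem_cache are released inside create_cache() before that NULL is returned.

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical object cached by a module; illustrative only. */
struct filter {
	u64 id;
	char name[32];
};

static struct kmem_cache *filter_cache;

static int __init filter_cache_init(void)
{
	/* The cache name is duplicated internally via kstrdup_const(). */
	filter_cache = kmem_cache_create("filter_cache", sizeof(struct filter),
					 0, SLAB_HWCACHE_ALIGN, NULL);
	if (!filter_cache)
		return -ENOMEM;	/* name and cache were already freed internally */
	return 0;
}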

From patchwork Mon Oct 31 13:47:46 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13025835
From: Liu Shixin <liushixin2@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
CC: Liu Shixin
Subject: [PATCH v2 2/3] mm/slub: Refactor __kmem_cache_create()
Date: Mon, 31 Oct 2022 21:47:46 +0800
Message-ID: <20221031134747.3049593-3-liushixin2@huawei.com>
In-Reply-To: <20221031134747.3049593-1-liushixin2@huawei.com>
References: <20221031134747.3049593-1-liushixin2@huawei.com>

Separating sysfs_slab_add() and debugfs_slab_add() from
__kmem_cache_create() helps to fix a kobject memory leak. After this
patch, the leak can be fixed naturally by calling kobject_put() to free
the kobject and its associated kmem_cache when sysfs_slab_add() fails.
Besides, it then becomes easier to provide sysfs and debugfs support for
other allocators too.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/slub_def.h | 11 +++++++++++
 mm/slab_common.c         | 12 ++++++++++++
 mm/slub.c                | 44 +++++++-------------------------------------
 3 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f9c68a9dac04..26d56c4c74d1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -144,9 +144,14 @@ struct kmem_cache {
 
 #ifdef CONFIG_SYSFS
 #define SLAB_SUPPORTS_SYSFS
+int sysfs_slab_add(struct kmem_cache *);
 void sysfs_slab_unlink(struct kmem_cache *);
 void sysfs_slab_release(struct kmem_cache *);
 #else
+static inline int sysfs_slab_add(struct kmem_cache *s)
+{
+	return 0;
+}
 static inline void sysfs_slab_unlink(struct kmem_cache *s)
 {
 }
@@ -155,6 +160,12 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 }
 #endif
 
+#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
+void debugfs_slab_add(struct kmem_cache *);
+#else
+static inline void debugfs_slab_add(struct kmem_cache *s) { }
+#endif
+
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
 static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index e5f430a17d95..55e2cf064dfe 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -234,6 +234,18 @@ static struct kmem_cache *create_cache(const char *name,
 	if (err)
 		goto out_free_name;
 
+#ifdef SLAB_SUPPORTS_SYSFS
+	/* Mutex is not taken during early boot */
+	if (slab_state >= FULL) {
+		err = sysfs_slab_add(s);
+		if (err) {
+			slab_kmem_cache_release(s);
+			return ERR_PTR(err);
+		}
+		debugfs_slab_add(s);
+	}
+#endif
+
 	s->refcount = 1;
 	list_add(&s->list, &slab_caches);
 	return s;
diff --git a/mm/slub.c b/mm/slub.c
index ba94eb6fda78..a1ad759753ce 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -299,20 +299,12 @@ struct track {
 enum track_item { TRACK_ALLOC, TRACK_FREE };
 
 #ifdef CONFIG_SYSFS
-static int sysfs_slab_add(struct kmem_cache *);
 static int sysfs_slab_alias(struct kmem_cache *, const char *);
 #else
-static inline int sysfs_slab_add(struct kmem_cache *s) { return 0; }
 static inline int sysfs_slab_alias(struct kmem_cache *s, const char *p)
 							{ return 0; }
 #endif
 
-#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_SLUB_DEBUG)
-static void debugfs_slab_add(struct kmem_cache *);
-#else
-static inline void debugfs_slab_add(struct kmem_cache *s) { }
-#endif
-
 static inline void stat(const struct kmem_cache *s, enum stat_item si)
 {
 #ifdef CONFIG_SLUB_STATS
@@ -4297,7 +4289,7 @@ static int calculate_sizes(struct kmem_cache *s)
 	return !!oo_objects(s->oo);
 }
 
-static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
+int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name);
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
@@ -4900,30 +4892,6 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 	return s;
 }
 
-int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
-{
-	int err;
-
-	err = kmem_cache_open(s, flags);
-	if (err)
-		return err;
-
-	/* Mutex is not taken during early boot */
-	if (slab_state <= UP)
-		return 0;
-
-	err = sysfs_slab_add(s);
-	if (err) {
-		__kmem_cache_release(s);
-		return err;
-	}
-
-	if (s->flags & SLAB_STORE_USER)
-		debugfs_slab_add(s);
-
-	return 0;
-}
-
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
 {
@@ -5913,7 +5881,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	return name;
 }
 
-static int sysfs_slab_add(struct kmem_cache *s)
+int sysfs_slab_add(struct kmem_cache *s)
 {
 	int err;
 	const char *name;
@@ -6236,10 +6204,13 @@ static const struct file_operations slab_debugfs_fops = {
 	.release = slab_debug_trace_release,
 };
 
-static void debugfs_slab_add(struct kmem_cache *s)
+void debugfs_slab_add(struct kmem_cache *s)
 {
 	struct dentry *slab_cache_dir;
 
+	if (!(s->flags & SLAB_STORE_USER))
+		return;
+
 	if (unlikely(!slab_debugfs_root))
 		return;
 
@@ -6264,8 +6235,7 @@ static int __init slab_debugfs_init(void)
 
 	slab_debugfs_root = debugfs_create_dir("slab", NULL);
 
 	list_for_each_entry(s, &slab_caches, list)
-		if (s->flags & SLAB_STORE_USER)
-			debugfs_slab_add(s);
+		debugfs_slab_add(s);
 
 	return 0;
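
The slub_def.h hunk above relies on the usual conditional-stub idiom. Here
is a generic sketch of the same pattern, with made-up names (CONFIG_FOO,
foo_register(), struct foo_dev) and not part of the patch: when the
facility is compiled out, a static inline stub keeps callers compiling and
the call optimises away.

struct foo_dev;

#ifdef CONFIG_FOO
int foo_register(struct foo_dev *dev);	/* real definition lives in foo.c */
#else
static inline int foo_register(struct foo_dev *dev)
{
	return 0;	/* no-op when CONFIG_FOO is not enabled */
}
#endif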

From patchwork Mon Oct 31 13:47:47 2022
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13025836
From: Liu Shixin <liushixin2@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
CC: Liu Shixin
Subject: [PATCH v2 3/3] mm/slub: Fix memory leak of kobj->name in sysfs_slab_add()
Date: Mon, 31 Oct 2022 21:47:47 +0800
Message-ID: <20221031134747.3049593-4-liushixin2@huawei.com>
In-Reply-To: <20221031134747.3049593-1-liushixin2@huawei.com>
References: <20221031134747.3049593-1-liushixin2@huawei.com>

There is a memory leak of kobj->name in sysfs_slab_add():

unreferenced object 0xffff88817e446440 (size 32):
  comm "insmod", pid 4085, jiffies 4296564501 (age 126.272s)
  hex dump (first 32 bytes):
    75 62 69 66 73 5f 69 6e 6f 64 65 5f 73 6c 61 62  ubifs_inode_slab
    00 65 44 7e 81 88 ff ff 00 00 00 00 00 00 00 00  .eD~............
  backtrace:
    [<000000005b30fbbd>] __kmalloc_node_track_caller+0x4e/0x150
    [<000000002f70da0c>] kstrdup_const+0x4b/0x80
    [<00000000c6712c61>] kobject_set_name_vargs+0x2f/0xb0
    [<00000000b151218e>] kobject_init_and_add+0xb0/0x120
    [<00000000e56a4cf5>] sysfs_slab_add+0x17d/0x220
    [<000000009326fd57>] __kmem_cache_create+0x406/0x590
    [<00000000dde33cff>] kmem_cache_create_usercopy+0x1fc/0x300
    [<00000000fe90cedb>] kmem_cache_create+0x12/0x20
    [<000000007a6531c8>] 0xffffffffa02d802d
    [<000000000e3b13c7>] do_one_initcall+0x87/0x2a0
    [<00000000995ecdcf>] do_init_module+0xdf/0x320
    [<000000008821941f>] load_module+0x2f98/0x3330
    [<00000000ef51efa4>] __do_sys_finit_module+0x113/0x1b0
    [<000000009339fbce>] do_syscall_64+0x35/0x80
    [<000000006b7f2033>] entry_SYSCALL_64_after_hwframe+0x46/0xb0

Following the rule stated in the comment for kobject_init_and_add():

    If this function returns an error, kobject_put() must be called to
    properly clean up the memory associated with the object.

kobject_put() is the appropriate cleanup once kobject_init() has been
called, so use it on the error path of sysfs_slab_add() to fix the leak.

Fixes: 80da026a8e5d ("mm/slub: fix slab double-free in case of duplicate sysfs filename")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 mm/slab_common.c | 4 +---
 mm/slub.c        | 8 ++++++--
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 55e2cf064dfe..9337724b5c76 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -238,10 +238,8 @@ static struct kmem_cache *create_cache(const char *name,
 	/* Mutex is not taken during early boot */
 	if (slab_state >= FULL) {
 		err = sysfs_slab_add(s);
-		if (err) {
-			slab_kmem_cache_release(s);
+		if (err)
 			return ERR_PTR(err);
-		}
 		debugfs_slab_add(s);
 	}
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index a1ad759753ce..f8883bc642b8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5911,14 +5911,16 @@ int sysfs_slab_add(struct kmem_cache *s)
 		 * for the symlinks.
 		 */
 		name = create_unique_id(s);
-		if (IS_ERR(name))
+		if (IS_ERR(name)) {
+			slab_kmem_cache_release(s);
 			return PTR_ERR(name);
+		}
 	}
 
 	s->kobj.kset = kset;
 	err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
 	if (err)
-		goto out;
+		goto out_put_kobj;
 
 	err = sysfs_create_group(&s->kobj, &slab_attr_group);
 	if (err)
@@ -5934,6 +5936,8 @@ int sysfs_slab_add(struct kmem_cache *s)
 	return err;
 
 out_del_kobj:
 	kobject_del(&s->kobj);
+out_put_kobj:
+	kobject_put(&s->kobj);
 	goto out;
 }
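
For reference, a minimal sketch of the kobject rule quoted in the changelog,
using made-up names (struct foo_obj, foo_ktype, foo_register()); it is
illustrative and not part of the patch. Once kobject_init_and_add() has been
called, even when it fails, the object must be dropped with kobject_put(),
which frees kobj->name and invokes the ktype's release callback.

#include <linux/container_of.h>
#include <linux/kobject.h>
#include <linux/slab.h>

struct foo_obj {
	struct kobject kobj;
	int id;
};

static void foo_release(struct kobject *kobj)
{
	kfree(container_of(kobj, struct foo_obj, kobj));
}

static struct kobj_type foo_ktype = {
	.release = foo_release,
};

static int foo_register(struct foo_obj *foo, struct kobject *parent)
{
	int err;

	err = kobject_init_and_add(&foo->kobj, &foo_ktype, parent, "foo%d", foo->id);
	if (err)
		/* Frees kobj->name and ends up calling foo_release(). */
		kobject_put(&foo->kobj);
	return err;
}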