From patchwork Fri Apr 24 15:12:24 2020
X-Patchwork-Submitter: Waiman Long <longman@redhat.com>
X-Patchwork-Id: 11508087
From: Waiman Long <longman@redhat.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Juri Lelli,
    Waiman Long
Subject: [PATCH 1/2] mm, slab: Revert "extend slab/shrink to shrink all memcg caches"
Date: Fri, 24 Apr 2020 11:12:24 -0400
Message-Id: <20200424151225.10966-1-longman@redhat.com>

When the slub shrink sysfs file is written to, the function call
sequence is as follows:

  kernfs_fop_write
    => slab_attr_store
      => shrink_store
        => kmem_cache_shrink_all

It turns out that doing a memcg cache scan in kmem_cache_shrink_all()
is redundant, because the same memcg cache scan is already done in
slab_attr_store(). So revert commit 04f768a39d55 ("mm, slab: extend
slab/shrink to shrink all memcg caches"), except for the documentation
change, which is still valid.

Signed-off-by: Waiman Long <longman@redhat.com>
---
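A minimal userspace model of the redundancy, with simplified stand-ins
for the kernel structures (not the kernel code itself): slab_attr_store()
already propagates the store method to every child cache, so the extra
child scan inside kmem_cache_shrink_all() ends up shrinking each child
a second time:

  #include <stdio.h>

  struct kmem_cache {
          const char *name;
          struct kmem_cache **children;  /* NULL-terminated; empty for children */
  };

  static void shrink_one(struct kmem_cache *c)
  {
          printf("shrink %s\n", c->name);
  }

  /* models kmem_cache_shrink_all(): shrink the cache and every child */
  static void shrink_all(struct kmem_cache *s)
  {
          struct kmem_cache **c;

          shrink_one(s);
          for (c = s->children; *c; c++)
                  shrink_one(*c);
  }

  /* models slab_attr_store(): run the store method (here, the shrink)
   * on the root cache, then propagate the same store to every child */
  static void slab_attr_store(struct kmem_cache *root)
  {
          struct kmem_cache **c;

          shrink_all(root);               /* shrink_store() on the root */
          for (c = root->children; *c; c++)
                  shrink_all(*c);         /* shrink_store() again per child */
  }

  int main(void)
  {
          static struct kmem_cache *none[] = { NULL };
          struct kmem_cache m1 = { "memcg#1", none };
          struct kmem_cache m2 = { "memcg#2", none };
          struct kmem_cache *kids[] = { &m1, &m2, NULL };
          struct kmem_cache root = { "root", kids };

          slab_attr_store(&root);  /* each memcg child is shrunk twice */
          return 0;
  }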
 mm/slab.h        |  1 -
 mm/slab_common.c | 37 -------------------------------------
 mm/slub.c        |  2 +-
 3 files changed, 1 insertion(+), 39 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 207c83ef6e06..0937cb2ae8aa 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -237,7 +237,6 @@ int __kmem_cache_shrink(struct kmem_cache *);
 void __kmemcg_cache_deactivate(struct kmem_cache *s);
 void __kmemcg_cache_deactivate_after_rcu(struct kmem_cache *s);
 void slab_kmem_cache_release(struct kmem_cache *);
-void kmem_cache_shrink_all(struct kmem_cache *s);
 
 struct seq_file;
 struct file;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 23c7500eea7d..2e367ab8c15c 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -995,43 +995,6 @@ int kmem_cache_shrink(struct kmem_cache *cachep)
 }
 EXPORT_SYMBOL(kmem_cache_shrink);
 
-/**
- * kmem_cache_shrink_all - shrink a cache and all memcg caches for root cache
- * @s: The cache pointer
- */
-void kmem_cache_shrink_all(struct kmem_cache *s)
-{
-	struct kmem_cache *c;
-
-	if (!IS_ENABLED(CONFIG_MEMCG_KMEM) || !is_root_cache(s)) {
-		kmem_cache_shrink(s);
-		return;
-	}
-
-	get_online_cpus();
-	get_online_mems();
-	kasan_cache_shrink(s);
-	__kmem_cache_shrink(s);
-
-	/*
-	 * We have to take the slab_mutex to protect from the memcg list
-	 * modification.
-	 */
-	mutex_lock(&slab_mutex);
-	for_each_memcg_cache(c, s) {
-		/*
-		 * Don't need to shrink deactivated memcg caches.
-		 */
-		if (s->flags & SLAB_DEACTIVATED)
-			continue;
-		kasan_cache_shrink(c);
-		__kmem_cache_shrink(c);
-	}
-	mutex_unlock(&slab_mutex);
-	put_online_mems();
-	put_online_cpus();
-}
-
 bool slab_is_available(void)
 {
 	return slab_state >= UP;
diff --git a/mm/slub.c b/mm/slub.c
index 9bf44955c4f1..183ccc364ccf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5343,7 +5343,7 @@ static ssize_t shrink_store(struct kmem_cache *s,
 			    const char *buf, size_t length)
 {
 	if (buf[0] == '1')
-		kmem_cache_shrink_all(s);
+		kmem_cache_shrink(s);
 	else
 		return -EINVAL;
 	return length;

From patchwork Fri Apr 24 15:12:25 2020
X-Patchwork-Submitter: Waiman Long <longman@redhat.com>
X-Patchwork-Id: 11508089
From: Waiman Long <longman@redhat.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Juri Lelli,
    Waiman Long
Subject: [PATCH 2/2] mm/slub: Fix slab_mutex circular locking problem in slab_attr_store()
Date: Fri, 24 Apr 2020 11:12:25 -0400
Message-Id: <20200424151225.10966-2-longman@redhat.com>
In-Reply-To: <20200424151225.10966-1-longman@redhat.com>
References: <20200424151225.10966-1-longman@redhat.com>

The following lockdep splat was reported:

[ 176.241923] ======================================================
[ 176.241924] WARNING: possible circular locking dependency detected
[ 176.241926] 4.18.0-172.rt13.29.el8.x86_64+debug #1 Not tainted
[ 176.241927] ------------------------------------------------------
[ 176.241929] slub_cpu_partia/5371 is trying to acquire lock:
[ 176.241930] ffffffffa0b83718 (slab_mutex){+.+.}, at: slab_attr_store+0x6b/0xe0
[ 176.241941] but task is already holding lock:
[ 176.241942] ffff88bb6d8b83c8 (kn->count#103){++++}, at: kernfs_fop_write+0x1cc/0x400
[ 176.241947] which lock already depends on the new lock.
[ 176.241949] the existing dependency chain (in reverse order) is:
[ 176.241949] -> #1 (kn->count#103){++++}:
[ 176.241955]        __kernfs_remove+0x616/0x800
[ 176.241957]        kernfs_remove_by_name_ns+0x3e/0x80
[ 176.241959]        sysfs_slab_add+0x1c6/0x330
[ 176.241961]        __kmem_cache_create+0x15f/0x1b0
[ 176.241964]        create_cache+0xe1/0x220
[ 176.241966]        kmem_cache_create_usercopy+0x1a3/0x260
[ 176.241967]        kmem_cache_create+0x12/0x20
[ 176.242076]        mlx5_init_fs+0x18d/0x1a00 [mlx5_core]
[ 176.242100]        mlx5_load_one+0x3b4/0x1730 [mlx5_core]
[ 176.242124]        init_one+0x901/0x11b0 [mlx5_core]
[ 176.242127]        local_pci_probe+0xd4/0x180
[ 176.242131]        work_for_cpu_fn+0x51/0xa0
[ 176.242133]        process_one_work+0x91a/0x1ac0
[ 176.242134]        worker_thread+0x536/0xb40
[ 176.242136]        kthread+0x30c/0x3d0
[ 176.242140]        ret_from_fork+0x27/0x50
[ 176.242140] -> #0 (slab_mutex){+.+.}:
[ 176.242145]        __lock_acquire+0x22cb/0x48c0
[ 176.242146]        lock_acquire+0x134/0x4c0
[ 176.242148]        _mutex_lock+0x28/0x40
[ 176.242150]        slab_attr_store+0x6b/0xe0
[ 176.242151]        kernfs_fop_write+0x251/0x400
[ 176.242154]        vfs_write+0x157/0x460
[ 176.242155]        ksys_write+0xb8/0x170
[ 176.242158]        do_syscall_64+0x13c/0x710
[ 176.242160]        entry_SYSCALL_64_after_hwframe+0x6a/0xdf
[ 176.242161] other info that might help us debug this:
[ 176.242161]  Possible unsafe locking scenario:
[ 176.242162]        CPU0                    CPU1
[ 176.242163]        ----                    ----
[ 176.242163]   lock(kn->count#103);
[ 176.242165]                               lock(slab_mutex);
[ 176.242166]                               lock(kn->count#103);
[ 176.242167]   lock(slab_mutex);
[ 176.242169] *** DEADLOCK ***
[ 176.242170] 3 locks held by slub_cpu_partia/5371:
[ 176.242170]  #0: ffff888705e3a800 (sb_writers#4){.+.+}, at: vfs_write+0x31c/0x460
[ 176.242174]  #1: ffff889aeec4d658 (&of->mutex){+.+.}, at: kernfs_fop_write+0x1a9/0x400
[ 176.242177]  #2: ffff88bb6d8b83c8 (kn->count#103){++++}, at: kernfs_fop_write+0x1cc/0x400
[ 176.242180] stack backtrace:
[ 176.242183] CPU: 36 PID: 5371 Comm: slub_cpu_partia Not tainted 4.18.0-172.rt13.29.el8.x86_64+debug #1
[ 176.242184] Hardware name: AMD Corporation DAYTONA_X/DAYTONA_X, BIOS RDY1005C 11/22/2019
[ 176.242185] Call Trace:
[ 176.242190]  dump_stack+0x9a/0xf0
[ 176.242193]  check_noncircular+0x317/0x3c0
[ 176.242195]  ? print_circular_bug+0x1e0/0x1e0
[ 176.242199]  ? native_sched_clock+0x32/0x1e0
[ 176.242202]  ? sched_clock+0x5/0x10
[ 176.242205]  ? sched_clock_cpu+0x238/0x340
[ 176.242208]  __lock_acquire+0x22cb/0x48c0
[ 176.242213]  ? trace_hardirqs_on+0x10/0x10
[ 176.242215]  ? trace_hardirqs_on+0x10/0x10
[ 176.242218]  lock_acquire+0x134/0x4c0
[ 176.242220]  ? slab_attr_store+0x6b/0xe0
[ 176.242223]  _mutex_lock+0x28/0x40
[ 176.242225]  ? slab_attr_store+0x6b/0xe0
[ 176.242227]  slab_attr_store+0x6b/0xe0
[ 176.242229]  ? sysfs_file_ops+0x160/0x160
[ 176.242230]  kernfs_fop_write+0x251/0x400
[ 176.242232]  ? __sb_start_write+0x26a/0x3f0
[ 176.242234]  vfs_write+0x157/0x460
[ 176.242237]  ksys_write+0xb8/0x170
[ 176.242239]  ? __ia32_sys_read+0xb0/0xb0
[ 176.242242]  ? do_syscall_64+0xb9/0x710
[ 176.242245]  do_syscall_64+0x13c/0x710
[ 176.242247]  entry_SYSCALL_64_after_hwframe+0x6a/0xdf
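The splat above is a classic AB-BA inversion: the sysfs write path takes
the kernfs active reference (kn->count) and then slab_mutex, while the
cache creation path holds slab_mutex when it removes a sysfs file and
drains kn->count. A minimal userspace analogue of that ordering problem,
with plain pthread mutexes standing in for the two kernel locks
(hypothetical, for illustration only):

  #include <pthread.h>
  #include <stdio.h>

  /* stand-ins for the two kernel locks */
  static pthread_mutex_t kn_count = PTHREAD_MUTEX_INITIALIZER;   /* kn->count */
  static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER; /* slab_mutex */

  /* sysfs write path: kn->count first, then slab_mutex */
  static void *writer(void *arg)
  {
          (void)arg;
          pthread_mutex_lock(&kn_count);
          pthread_mutex_lock(&slab_mutex);  /* blocks if creator() holds it */
          pthread_mutex_unlock(&slab_mutex);
          pthread_mutex_unlock(&kn_count);
          return NULL;
  }

  /* cache creation path: slab_mutex first, then kn->count */
  static void *creator(void *arg)
  {
          (void)arg;
          pthread_mutex_lock(&slab_mutex);
          pthread_mutex_lock(&kn_count);    /* can deadlock against writer() */
          pthread_mutex_unlock(&kn_count);
          pthread_mutex_unlock(&slab_mutex);
          return NULL;
  }

  int main(void)
  {
          pthread_t w, c;

          pthread_create(&w, NULL, writer, NULL);
          pthread_create(&c, NULL, creator, NULL);
          pthread_join(w, NULL);  /* with unlucky timing, never returns */
          pthread_join(c, NULL);
          puts("no deadlock on this run");
          return 0;
  }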
There was another lockdep splat generated by echoing "1" to
"/sys/kernel/slab/fs_cache/shrink":

[ 445.231443] Chain exists of:
                cpu_hotplug_lock --> mem_hotplug_lock --> slab_mutex
[ 445.242025]  Possible unsafe locking scenario:
[ 445.247977]        CPU0                    CPU1
[ 445.252529]        ----                    ----
[ 445.257082]   lock(slab_mutex);
[ 445.260239]                               lock(mem_hotplug_lock);
[ 445.266452]                               lock(slab_mutex);
[ 445.272141]   lock(cpu_hotplug_lock);

So it is problematic to use slab_mutex to protect the iteration of the
child memcg caches with for_each_memcg_cache(). Fortunately, the child
caches can also be iterated by going through the array entries in
memcg_params.memcg_caches while holding a read lock on
memcg_cache_ids_sem. To avoid other possible circular locking problems,
we only take a reference to each child cache and store its address
while holding memcg_cache_ids_sem. The actual store method is called
for each of the child caches after releasing the lock.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/slub.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 48 insertions(+), 8 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 183ccc364ccf..255981180489 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5567,13 +5567,30 @@ static ssize_t slab_attr_store(struct kobject *kobj,
 		return -EIO;
 
 	err = attribute->store(s, buf, len);
-#ifdef CONFIG_MEMCG
-	if (slab_state >= FULL && err >= 0 && is_root_cache(s)) {
-		struct kmem_cache *c;
+#ifdef CONFIG_MEMCG_KMEM
+	if (slab_state >= FULL && err >= 0 && is_root_cache(s) &&
+	    !list_empty(&s->memcg_params.children)) {
+		struct kmem_cache *c, **pcaches;
+		int idx, max, cnt = 0;
+		size_t size = s->max_attr_size;
+		struct memcg_cache_array *arr;
+
+		/*
+		 * Make atomic update to s->max_attr_size.
+		 */
+		do {
+			if (len <= size)
+				break;
+		} while (!try_cmpxchg(&s->max_attr_size, &size, len));
 
-		mutex_lock(&slab_mutex);
-		if (s->max_attr_size < len)
-			s->max_attr_size = len;
+		memcg_get_cache_ids();
+		max = memcg_nr_cache_ids;
+
+		pcaches = kmalloc_array(max, sizeof(void *), GFP_KERNEL);
+		if (!pcaches) {
+			memcg_put_cache_ids();
+			return -ENOMEM;
+		}
 
 		/*
 		 * This is a best effort propagation, so this function's return
@@ -5591,10 +5608,33 @@ static ssize_t slab_attr_store(struct kobject *kobj,
 		 * has well defined semantics. The cache being written to
 		 * directly either failed or succeeded, in which case we loop
 		 * through the descendants with best-effort propagation.
+		 *
+		 * To avoid potential circular lock dependency problems, we
+		 * just get a reference and store child cache pointers while
+		 * holding the memcg_cache_ids_sem read lock. The store
+		 * method is then called for each child cache after releasing
+		 * the lock. Code sequence partly borrowed from
+		 * memcg_kmem_get_cache().
 		 */
-		for_each_memcg_cache(c, s)
+		rcu_read_lock();
+		arr = rcu_dereference(s->memcg_params.memcg_caches);
+		for (idx = 0; idx < max; idx++) {
+			c = READ_ONCE(arr->entries[idx]);
+			if (!c)
+				continue;
+			if (!percpu_ref_tryget(&c->memcg_params.refcnt))
+				continue;
+			pcaches[cnt++] = c;
+		}
+		rcu_read_unlock();
+		memcg_put_cache_ids();
+
+		for (idx = 0; idx < cnt; idx++) {
+			c = pcaches[idx];
 			attribute->store(c, buf, len);
-		mutex_unlock(&slab_mutex);
+			percpu_ref_put(&c->memcg_params.refcnt);
+		}
+		kfree(pcaches);
 	}
 #endif
 	return err;
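For reference, the shape of the fix is a common snapshot-then-act
pattern: pin each child with a reference while holding only the
lightweight read lock, drop the lock, then run the store method on the
pinned snapshot. A self-contained userspace model of that pattern (all
names are simplified stand-ins for the kernel primitives, not the
kernel API itself):

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct cache { const char *name; int refs; };

  /* stands in for memcg_cache_ids_sem */
  static pthread_rwlock_t ids_sem = PTHREAD_RWLOCK_INITIALIZER;
  /* stands in for s->memcg_params.memcg_caches; may contain NULL holes */
  static struct cache *caches[4];

  /* stands in for percpu_ref_tryget(); fails only for a NULL slot here */
  static int try_get(struct cache *c)
  {
          if (!c)
                  return 0;
          c->refs++;
          return 1;
  }

  static void put_ref(struct cache *c) { c->refs--; }  /* percpu_ref_put() */

  /* stands in for attribute->store() on a child cache */
  static void store(struct cache *c) { printf("store -> %s\n", c->name); }

  static int propagate(int nr_ids)
  {
          struct cache **snap = malloc(nr_ids * sizeof(*snap));
          int i, cnt = 0;

          if (!snap)
                  return -1;

          /* Phase 1: snapshot the live children under the read lock only;
           * slab_mutex is never taken, so there is no ordering against
           * kn->count. */
          pthread_rwlock_rdlock(&ids_sem);
          for (i = 0; i < nr_ids; i++)
                  if (try_get(caches[i]))
                          snap[cnt++] = caches[i];
          pthread_rwlock_unlock(&ids_sem);

          /* Phase 2: act on the snapshot with no lock held, then drop
           * the references. */
          for (i = 0; i < cnt; i++) {
                  store(snap[i]);
                  put_ref(snap[i]);
          }
          free(snap);
          return 0;
  }

  int main(void)
  {
          struct cache a = { "memcg#1", 0 }, b = { "memcg#2", 0 };

          caches[0] = &a;
          caches[2] = &b;  /* sparse array, like memcg_caches */
          return propagate(4);
  }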