Message ID | 1498027155-4456-1-git-send-email-stummala@codeaurora.org (mailing list archive) |
---|---|
State | New, archived |

On Wed, Jun 21, 2017 at 12:09:15PM +0530, Sahitya Tummala wrote:
> __list_lru_walk_one() acquires the nlru spin lock (nlru->lock) for a long
> duration if there are many items in the lru list. As per the current code,
> it can hold the spin lock for up to UINT_MAX entries at a time. So if there
> are many items in the lru list, a "BUG: spinlock lockup suspected" is
> observed in the below path -
>
> [<ffffff8eca0fb0bc>] spin_bug+0x90
> [<ffffff8eca0fb220>] do_raw_spin_lock+0xfc
> [<ffffff8ecafb7798>] _raw_spin_lock+0x28
> [<ffffff8eca1ae884>] list_lru_add+0x28
> [<ffffff8eca1f5dac>] dput+0x1c8
> [<ffffff8eca1eb46c>] path_put+0x20
> [<ffffff8eca1eb73c>] terminate_walk+0x3c
> [<ffffff8eca1eee58>] path_lookupat+0x100
> [<ffffff8eca1f00fc>] filename_lookup+0x6c
> [<ffffff8eca1f0264>] user_path_at_empty+0x54
> [<ffffff8eca1e066c>] SyS_faccessat+0xd0
> [<ffffff8eca084e30>] el0_svc_naked+0x24
>
> This nlru->lock is acquired by another CPU in this path -
>
> [<ffffff8eca1f5fd0>] d_lru_shrink_move+0x34
> [<ffffff8eca1f6180>] dentry_lru_isolate_shrink+0x48
> [<ffffff8eca1aeafc>] __list_lru_walk_one.isra.10+0x94
> [<ffffff8eca1aec34>] list_lru_walk_node+0x40
> [<ffffff8eca1f6620>] shrink_dcache_sb+0x60
> [<ffffff8eca1e56a8>] do_remount_sb+0xbc
> [<ffffff8eca1e583c>] do_emergency_remount+0xb0
> [<ffffff8eca0ba510>] process_one_work+0x228
> [<ffffff8eca0bb158>] worker_thread+0x2e0
> [<ffffff8eca0c040c>] kthread+0xf4
> [<ffffff8eca084dd0>] ret_from_fork+0x10
>
> Fix this lockup by reducing the number of entries shrunk from the lru list
> to 1024 at a time. Also, add cond_resched() before processing the lru list
> again.
>
> Link: http://marc.info/?t=149722864900001&r=1&w=2
> Fix-suggested-by: Jan Kara <jack@suse.cz>
> Fix-suggested-by: Vladimir Davydov <vdavydov.dev@gmail.com>
> Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
> ---
> v2: patch shrink_dcache_sb() instead of list_lru_walk()
> ---
>  fs/dcache.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index cddf397..c8ca150 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
>  		LIST_HEAD(dispose);
>
>  		freed = list_lru_walk(&sb->s_dentry_lru,
> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> +			dentry_lru_isolate_shrink, &dispose, 1024);
>
>  		this_cpu_sub(nr_dentry_unused, freed);
>  		shrink_dentry_list(&dispose);
> +		cond_resched();
>  	} while (freed > 0);

In an extreme case, a single invocation of list_lru_walk() can skip all
1024 dentries, in which case 'freed' will be 0 forcing us to break the
loop prematurely. I think we should loop until there's at least one
dentry left on the LRU, i.e.

	while (list_lru_count(&sb->s_dentry_lru) > 0)

However, even that wouldn't be quite correct, because list_lru_count()
iterates over all memory cgroups to sum list_lru_one->nr_items, which
can race with memcg offlining code migrating dentries off a dead cgroup
(see memcg_drain_all_list_lrus()). So it looks like to make this check
race-free, we need to account the number of entries on the LRU not only
per memcg, but also per node, i.e. add list_lru_node->nr_items.
Fortunately, list_lru entries can't be migrated between NUMA nodes.

>  }
>  EXPORT_SYMBOL(shrink_dcache_sb);

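[Editor's note: for illustration, this is roughly the loop shape the suggestion above amounts to, before the list_lru_count() race described in the same mail is taken into account. It is only a sketch against the shrink_dcache_sb() code quoted in the patch, not a tested change.]

/*
 * Sketch only: keep walking until the dentry LRU is actually empty,
 * instead of stopping when one batch happens to isolate nothing.
 */
void shrink_dcache_sb(struct super_block *sb)
{
	do {
		LIST_HEAD(dispose);
		long freed;

		/* bounded batch, so nlru->lock is not held for up to UINT_MAX entries */
		freed = list_lru_walk(&sb->s_dentry_lru,
				dentry_lru_isolate_shrink, &dispose, 1024);

		this_cpu_sub(nr_dentry_unused, freed);
		shrink_dentry_list(&dispose);
		cond_resched();
	} while (list_lru_count(&sb->s_dentry_lru) > 0);
}
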
On 6/21/2017 10:01 PM, Vladimir Davydov wrote:
>
>> index cddf397..c8ca150 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
>>  		LIST_HEAD(dispose);
>>
>>  		freed = list_lru_walk(&sb->s_dentry_lru,
>> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
>> +			dentry_lru_isolate_shrink, &dispose, 1024);
>>
>>  		this_cpu_sub(nr_dentry_unused, freed);
>>  		shrink_dentry_list(&dispose);
>> +		cond_resched();
>>  	} while (freed > 0);
> In an extreme case, a single invocation of list_lru_walk() can skip all
> 1024 dentries, in which case 'freed' will be 0 forcing us to break the
> loop prematurely. I think we should loop until there's at least one
> dentry left on the LRU, i.e.
>
> 	while (list_lru_count(&sb->s_dentry_lru) > 0)
>
> However, even that wouldn't be quite correct, because list_lru_count()
> iterates over all memory cgroups to sum list_lru_one->nr_items, which
> can race with memcg offlining code migrating dentries off a dead cgroup
> (see memcg_drain_all_list_lrus()). So it looks like to make this check
> race-free, we need to account the number of entries on the LRU not only
> per memcg, but also per node, i.e. add list_lru_node->nr_items.
> Fortunately, list_lru entries can't be migrated between NUMA nodes.

It looks like list_lru_count() is iterating per node before iterating over
all memory cgroups, as below -

unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
	long count = 0;
	int memcg_idx;

	count += __list_lru_count_one(lru, nid, -1);
	if (list_lru_memcg_aware(lru)) {
		for_each_memcg_cache_index(memcg_idx)
			count += __list_lru_count_one(lru, nid, memcg_idx);
	}
	return count;
}

The first call to __list_lru_count_one() is iterating over all the items
per node, i.e. nlru->lru->nr_items. Is my understanding correct? If not,
could you please clarify how to get the lru items per node?

On Thu, Jun 22, 2017 at 10:01:39PM +0530, Sahitya Tummala wrote:
>
> On 6/21/2017 10:01 PM, Vladimir Davydov wrote:
> >
> >> index cddf397..c8ca150 100644
> >> --- a/fs/dcache.c
> >> +++ b/fs/dcache.c
> >> @@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
> >>  		LIST_HEAD(dispose);
> >>  		freed = list_lru_walk(&sb->s_dentry_lru,
> >> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> >> +			dentry_lru_isolate_shrink, &dispose, 1024);
> >>  		this_cpu_sub(nr_dentry_unused, freed);
> >>  		shrink_dentry_list(&dispose);
> >> +		cond_resched();
> >>  	} while (freed > 0);
> > In an extreme case, a single invocation of list_lru_walk() can skip all
> > 1024 dentries, in which case 'freed' will be 0 forcing us to break the
> > loop prematurely. I think we should loop until there's at least one
> > dentry left on the LRU, i.e.
> >
> > 	while (list_lru_count(&sb->s_dentry_lru) > 0)
> >
> > However, even that wouldn't be quite correct, because list_lru_count()
> > iterates over all memory cgroups to sum list_lru_one->nr_items, which
> > can race with memcg offlining code migrating dentries off a dead cgroup
> > (see memcg_drain_all_list_lrus()). So it looks like to make this check
> > race-free, we need to account the number of entries on the LRU not only
> > per memcg, but also per node, i.e. add list_lru_node->nr_items.
> > Fortunately, list_lru entries can't be migrated between NUMA nodes.
>
> It looks like list_lru_count() is iterating per node before iterating over
> all memory cgroups, as below -
>
> unsigned long list_lru_count_node(struct list_lru *lru, int nid)
> {
> 	long count = 0;
> 	int memcg_idx;
>
> 	count += __list_lru_count_one(lru, nid, -1);
> 	if (list_lru_memcg_aware(lru)) {
> 		for_each_memcg_cache_index(memcg_idx)
> 			count += __list_lru_count_one(lru, nid, memcg_idx);
> 	}
> 	return count;
> }
>
> The first call to __list_lru_count_one() is iterating over all the items
> per node, i.e. nlru->lru->nr_items.

lru->node[nid].lru.nr_items returned by __list_lru_count_one(lru, nid, -1)
only counts items accounted to the root cgroup, not the total number of
entries on the node.

> Is my understanding correct? If not, could you please clarify how to get
> the lru items per node?

What I mean is that iterating over list_lru_node->memcg_lrus to count the
number of entries on the node is racy. For example, suppose you have three
cgroups with the following values of list_lru_one->nr_items:

	0	0	10

While list_lru_count_node() is at #1, cgroup #2 is offlined and its
list_lru_one is drained, i.e. its entries are migrated to the parent
cgroup, which happens to be #0, i.e. we see the following picture:

	10	0	0
		^^^
		memcg_ids points here in list_lru_count_node()

Then the count returned by list_lru_count_node() will be 0, although
there are still 10 entries on the list.

To avoid this race, we could keep list_lru_node->lock locked while walking
over list_lru_node->memcg_lrus, but that's too heavy. I'd prefer adding
list_lru_node->nr_items, which would be equal to the total number of
list_lru entries on the node, i.e. the sum of list_lru_node->lru.nr_items
and list_lru_node->memcg_lrus->lru[]->nr_items.

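[Editor's note: a rough sketch of the per-node counter idea described above. Field placement, names, and the struct layout are illustrative reconstructions of mm/list_lru.c from that era, not the patch that was eventually merged; list_lru_add(), list_lru_del(), the isolate callbacks, and memcg_drain_list_lru_node() would all need matching nr_items updates, which are omitted here.]

/*
 * Illustrative only: a per-node item count maintained under nlru->lock,
 * so list_lru_count_node() no longer has to walk memcg_lrus at all.
 */
struct list_lru_node {
	spinlock_t		lock;		/* protects all lists on the node */
	struct list_lru_one	lru;		/* root-cgroup (global) entries */
#ifdef CONFIG_MEMCG
	struct list_lru_memcg	*memcg_lrus;	/* per-memcg lists, if memcg aware */
#endif
	long			nr_items;	/* new: total entries on this node */
} ____cacheline_aligned_in_smp;

/*
 * Every path that adds or removes an entry while holding nlru->lock would
 * also do nlru->nr_items++ / nlru->nr_items--. The node count then becomes
 * a single read that cannot race with memcg_drain_all_list_lrus(), because
 * entries never migrate between NUMA nodes:
 */
unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
	struct list_lru_node *nlru = &lru->node[nid];

	return nlru->nr_items;
}
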
diff --git a/fs/dcache.c b/fs/dcache.c
index cddf397..c8ca150 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
 		LIST_HEAD(dispose);
 
 		freed = list_lru_walk(&sb->s_dentry_lru,
-			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
+			dentry_lru_isolate_shrink, &dispose, 1024);
 
 		this_cpu_sub(nr_dentry_unused, freed);
 		shrink_dentry_list(&dispose);
+		cond_resched();
 	} while (freed > 0);
 }
 EXPORT_SYMBOL(shrink_dcache_sb);

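[Editor's note: with the hunk above applied, the whole function would read roughly as follows. This is a reconstruction assuming the surrounding fs/dcache.c code of the time, shown only to make the batched loop easier to follow.]

void shrink_dcache_sb(struct super_block *sb)
{
	long freed;

	do {
		LIST_HEAD(dispose);

		/* walk at most 1024 entries per pass instead of UINT_MAX */
		freed = list_lru_walk(&sb->s_dentry_lru,
			dentry_lru_isolate_shrink, &dispose, 1024);

		this_cpu_sub(nr_dentry_unused, freed);
		shrink_dentry_list(&dispose);
		/* give the scheduler a chance to run other tasks between batches */
		cond_resched();
	} while (freed > 0);
}
EXPORT_SYMBOL(shrink_dcache_sb);
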
__list_lru_walk_one() acquires the nlru spin lock (nlru->lock) for a long
duration if there are many items in the lru list. As per the current code,
it can hold the spin lock for up to UINT_MAX entries at a time. So if there
are many items in the lru list, a "BUG: spinlock lockup suspected" is
observed in the below path -

[<ffffff8eca0fb0bc>] spin_bug+0x90
[<ffffff8eca0fb220>] do_raw_spin_lock+0xfc
[<ffffff8ecafb7798>] _raw_spin_lock+0x28
[<ffffff8eca1ae884>] list_lru_add+0x28
[<ffffff8eca1f5dac>] dput+0x1c8
[<ffffff8eca1eb46c>] path_put+0x20
[<ffffff8eca1eb73c>] terminate_walk+0x3c
[<ffffff8eca1eee58>] path_lookupat+0x100
[<ffffff8eca1f00fc>] filename_lookup+0x6c
[<ffffff8eca1f0264>] user_path_at_empty+0x54
[<ffffff8eca1e066c>] SyS_faccessat+0xd0
[<ffffff8eca084e30>] el0_svc_naked+0x24

This nlru->lock is acquired by another CPU in this path -

[<ffffff8eca1f5fd0>] d_lru_shrink_move+0x34
[<ffffff8eca1f6180>] dentry_lru_isolate_shrink+0x48
[<ffffff8eca1aeafc>] __list_lru_walk_one.isra.10+0x94
[<ffffff8eca1aec34>] list_lru_walk_node+0x40
[<ffffff8eca1f6620>] shrink_dcache_sb+0x60
[<ffffff8eca1e56a8>] do_remount_sb+0xbc
[<ffffff8eca1e583c>] do_emergency_remount+0xb0
[<ffffff8eca0ba510>] process_one_work+0x228
[<ffffff8eca0bb158>] worker_thread+0x2e0
[<ffffff8eca0c040c>] kthread+0xf4
[<ffffff8eca084dd0>] ret_from_fork+0x10

Fix this lockup by reducing the number of entries shrunk from the lru list
to 1024 at a time. Also, add cond_resched() before processing the lru list
again.

Link: http://marc.info/?t=149722864900001&r=1&w=2
Fix-suggested-by: Jan Kara <jack@suse.cz>
Fix-suggested-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
---
v2: patch shrink_dcache_sb() instead of list_lru_walk()
---
 fs/dcache.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)