
[Bug 192981] New: page allocation stalls

Message ID 8f450abd-4e05-92d3-2533-72b05fea2012@beget.ru (mailing list archive)
State New, archived

Commit Message

Alexander Polakov Feb. 15, 2017, 12:56 p.m. UTC
On 01/24/2017 12:51 AM, Andrew Morton wrote:
>
>
> (switched to email.  Please respond via emailed reply-to-all, not via the
> bugzilla web interface).
>
> A 2100 second page allocation stall!
>

I think we finally figured out the problem using Tetsuo Handa's mallocwd 
patch. It seems to be in the XFS direct reclaim path.

Here's how it goes:

Memory is low, so rsync goes into direct reclaim, taking the XFS reclaim 
mutex under xfs_reclaim_inodes_nr():

2017-02-14T00:12:59.811447+03:00 storage9 [24646.497290] MemAlloc: 
rsync(19706) flags=0x404840 switches=8692 seq=340 
gfp=0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK) order=0 
delay=6795
2017-02-14T00:12:59.811447+03:00 storage9 [24646.497550] rsync           R
2017-02-14T00:12:59.811579+03:00 storage9     0 19706   5000 0x00000000
2017-02-14T00:12:59.811579+03:00 storage9 [24646.497690]  ffffa4361dc36c00
2017-02-14T00:12:59.811591+03:00 storage9  0000000c1dc36c00
2017-02-14T00:12:59.811591+03:00 storage9  ffffa4361dc36c00
2017-02-14T00:12:59.811724+03:00 storage9  ffffa44347792580
2017-02-14T00:12:59.811841+03:00 storage9
2017-02-14T00:12:59.811841+03:00 storage9 [24646.497951]  0000000000000000
2017-02-14T00:12:59.811846+03:00 storage9  ffffb1cab6343458
2017-02-14T00:12:59.811885+03:00 storage9  ffffffffb383e799
2017-02-14T00:12:59.811987+03:00 storage9  0000000000000000
2017-02-14T00:12:59.812103+03:00 storage9
2017-02-14T00:12:59.812103+03:00 storage9 [24646.498208]  ffffa443ffff7a00
2017-02-14T00:12:59.812103+03:00 storage9  0000000000000001
2017-02-14T00:12:59.812104+03:00 storage9  ffffb1cab6343448
2017-02-14T00:12:59.812233+03:00 storage9  0000000000000002
2017-02-14T00:12:59.812350+03:00 storage9
2017-02-14T00:12:59.812475+03:00 storage9 [24646.498462] Call Trace:
2017-02-14T00:12:59.812610+03:00 storage9 [24646.498587] 
[<ffffffffb383e799>] ? __schedule+0x179/0x5c8
2017-02-14T00:12:59.812733+03:00 storage9 [24646.498718] 
[<ffffffffb383ecc2>] ? schedule+0x32/0x80
2017-02-14T00:12:59.812869+03:00 storage9 [24646.498846] 
[<ffffffffb3841229>] ? schedule_timeout+0x159/0x2a0
2017-02-14T00:12:59.812997+03:00 storage9 [24646.498977] 
[<ffffffffb30f1450>] ? add_timer_on+0x130/0x130
2017-02-14T00:12:59.813130+03:00 storage9 [24646.499108] 
[<ffffffffb318ff13>] ? __alloc_pages_nodemask+0xe73/0x16b0
2017-02-14T00:12:59.813263+03:00 storage9 [24646.499240] 
[<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
2017-02-14T00:12:59.813388+03:00 storage9 [24646.499371] 
[<ffffffffb33d1a51>] ? xfs_buf_allocate_memory+0x171/0x2c0
2017-02-14T00:12:59.813564+03:00 storage9 [24646.499503] 
[<ffffffffb337c48b>] ? xfs_buf_get_map+0x18b/0x1d0
2017-02-14T00:12:59.813654+03:00 storage9 [24646.499634] 
[<ffffffffb337cdcb>] ? xfs_buf_read_map+0x3b/0x160
2017-02-14T00:12:59.813783+03:00 storage9 [24646.499765] 
[<ffffffffb33b71b0>] ? xfs_trans_read_buf_map+0x1f0/0x490
2017-02-14T00:12:59.813931+03:00 storage9 [24646.499897] 
[<ffffffffb3362859>] ? xfs_imap_to_bp+0x79/0x120
2017-02-14T00:12:59.814056+03:00 storage9 [24646.500029] 
[<ffffffffb3394a38>] ? xfs_iflush+0x118/0x380
2017-02-14T00:12:59.814196+03:00 storage9 [24646.500158] 
[<ffffffffb30d6130>] ? wake_atomic_t_function+0x40/0x40
2017-02-14T00:12:59.814314+03:00 storage9 [24646.500289] 
[<ffffffffb3385af4>] ? xfs_reclaim_inode+0x274/0x3f0
2017-02-14T00:12:59.814444+03:00 storage9 [24646.500421] 
[<ffffffffb3385e29>] ? xfs_reclaim_inodes_ag+0x1b9/0x2c0
2017-02-14T00:12:59.814573+03:00 storage9 [24646.500553] 
[<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
2017-02-14T00:12:59.814701+03:00 storage9 [24646.500685] 
[<ffffffffb30f0e31>] ? lock_timer_base+0x51/0x70
2017-02-14T00:12:59.814846+03:00 storage9 [24646.500814] 
[<ffffffffb345ca4e>] ? radix_tree_gang_lookup_tag+0xae/0x180
2017-02-14T00:12:59.814966+03:00 storage9 [24646.500946] 
[<ffffffffb336f588>] ? xfs_perag_get_tag+0x48/0x100
2017-02-14T00:12:59.815104+03:00 storage9 [24646.501079] 
[<ffffffffb31b63c1>] ? __list_lru_walk_one.isra.7+0x31/0x120
2017-02-14T00:12:59.815231+03:00 storage9 [24646.501212] 
[<ffffffffb322aed0>] ? iget5_locked+0x240/0x240
2017-02-14T00:12:59.815360+03:00 storage9 [24646.501342] 
[<ffffffffb33873c1>] ? xfs_reclaim_inodes_nr+0x31/0x40
2017-02-14T00:12:59.815489+03:00 storage9 [24646.501472] 
[<ffffffffb3213160>] ? super_cache_scan+0x1a0/0x1b0
2017-02-14T00:12:59.815629+03:00 storage9 [24646.501603] 
[<ffffffffb319bf62>] ? shrink_slab+0x262/0x440
2017-02-14T00:12:59.815760+03:00 storage9 [24646.501734] 
[<ffffffffb319cc8b>] ? drop_slab_node+0x2b/0x60
2017-02-14T00:12:59.815891+03:00 storage9 [24646.501864] 
[<ffffffffb319cd02>] ? drop_slab+0x42/0x70
2017-02-14T00:12:59.816020+03:00 storage9 [24646.501994] 
[<ffffffffb318abc0>] ? out_of_memory+0x220/0x560
2017-02-14T00:12:59.816160+03:00 storage9 [24646.502122] 
[<ffffffffb31903b2>] ? __alloc_pages_nodemask+0x1312/0x16b0
2017-02-14T00:12:59.816285+03:00 storage9 [24646.502255] 
[<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
2017-02-14T00:12:59.816407+03:00 storage9 [24646.502386] 
[<ffffffffb304b803>] ? pte_alloc_one+0x13/0x40
2017-02-14T00:12:59.816536+03:00 storage9 [24646.502517] 
[<ffffffffb31be21f>] ? handle_mm_fault+0xc7f/0x14b0
2017-02-14T00:12:59.816659+03:00 storage9 [24646.502648] 
[<ffffffffb30461ff>] ? __do_page_fault+0x1cf/0x5a0
2017-02-14T00:12:59.816851+03:00 storage9 [24646.502777] 
[<ffffffffb3843362>] ? page_fault+0x22/0x30

But it cannot get memory, presumably because memory is low, so it stays blocked.

Other processes do the same but they can't get past the mutex in 
xfs_reclaim_inodes_nr():

2017-02-14T00:12:59.817057+03:00 storage9 [24646.502909] MemAlloc: 
rsync(19707) flags=0x404840 switches=6344 seq=638 
gfp=0x24201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD) order=0 delay=3257 
uninterruptible
2017-02-14T00:12:59.817062+03:00 storage9 [24646.503167] rsync           D
2017-02-14T00:12:59.817189+03:00 storage9     0 19707   5000 0x00000000
2017-02-14T00:12:59.817197+03:00 storage9 [24646.503299]  0000000000000000
2017-02-14T00:12:59.817197+03:00 storage9  ffffa44347793840
2017-02-14T00:12:59.817207+03:00 storage9  ffffa4277a36ec00
2017-02-14T00:12:59.817316+03:00 storage9  ffffa4361dc35100
2017-02-14T00:12:59.817446+03:00 storage9
2017-02-14T00:12:59.817446+03:00 storage9 [24646.503554]  ffffa4437a514300
2017-02-14T00:12:59.817461+03:00 storage9  ffffb1cab0823638
2017-02-14T00:12:59.817461+03:00 storage9  ffffffffb383e799
2017-02-14T00:12:59.817598+03:00 storage9  ffffa44365967b00
2017-02-14T00:12:59.817699+03:00 storage9
2017-02-14T00:12:59.817699+03:00 storage9 [24646.503809]  ffffb1cab0823680
2017-02-14T00:12:59.817717+03:00 storage9  0000000000000001
2017-02-14T00:12:59.817717+03:00 storage9  ffffa443424062a8
2017-02-14T00:12:59.817832+03:00 storage9  0000000000000000
2017-02-14T00:12:59.817948+03:00 storage9
2017-02-14T00:12:59.818078+03:00 storage9 [24646.504061] Call Trace:
2017-02-14T00:12:59.818211+03:00 storage9 [24646.504189] 
[<ffffffffb383e799>] ? __schedule+0x179/0x5c8
2017-02-14T00:12:59.818339+03:00 storage9 [24646.504321] 
[<ffffffffb383ecc2>] ? schedule+0x32/0x80
2017-02-14T00:12:59.818471+03:00 storage9 [24646.504451] 
[<ffffffffb383ee6e>] ? schedule_preempt_disabled+0xe/0x20
2017-02-14T00:12:59.818611+03:00 storage9 [24646.504582] 
[<ffffffffb38400aa>] ? __mutex_lock_slowpath+0x8a/0x100
2017-02-14T00:12:59.818737+03:00 storage9 [24646.504715] 
[<ffffffffb3840133>] ? mutex_lock+0x13/0x22
2017-02-14T00:12:59.818870+03:00 storage9 [24646.504846] 
[<ffffffffb3385e98>] ? xfs_reclaim_inodes_ag+0x228/0x2c0
2017-02-14T00:12:59.819024+03:00 storage9 [24646.504977] 
[<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
2017-02-14T00:12:59.819124+03:00 storage9 [24646.505107] 
[<ffffffffb345ca4e>] ? radix_tree_gang_lookup_tag+0xae/0x180
2017-02-14T00:12:59.819273+03:00 storage9 [24646.505238] 
[<ffffffffb336f588>] ? xfs_perag_get_tag+0x48/0x100
2017-02-14T00:12:59.819397+03:00 storage9 [24646.505370] 
[<ffffffffb31b63c1>] ? __list_lru_walk_one.isra.7+0x31/0x120
2017-02-14T00:12:59.819528+03:00 storage9 [24646.505502] 
[<ffffffffb322aed0>] ? iget5_locked+0x240/0x240
2017-02-14T00:12:59.819657+03:00 storage9 [24646.505634] 
[<ffffffffb33873c1>] ? xfs_reclaim_inodes_nr+0x31/0x40
2017-02-14T00:12:59.819781+03:00 storage9 [24646.505766] 
[<ffffffffb3213160>] ? super_cache_scan+0x1a0/0x1b0
2017-02-14T00:12:59.819915+03:00 storage9 [24646.505897] 
[<ffffffffb319bf62>] ? shrink_slab+0x262/0x440
2017-02-14T00:12:59.820100+03:00 storage9 [24646.506028] 
[<ffffffffb319fa6f>] ? shrink_node+0xef/0x2d0
2017-02-14T00:12:59.820176+03:00 storage9 [24646.506158] 
[<ffffffffb319ff42>] ? do_try_to_free_pages+0xc2/0x2b0
2017-02-14T00:12:59.820300+03:00 storage9 [24646.506288] 
[<ffffffffb31a03e2>] ? try_to_free_pages+0xe2/0x1c0
2017-02-14T00:12:59.820442+03:00 storage9 [24646.506415] 
[<ffffffffb318c1a9>] ? __perform_reclaim.isra.80+0x79/0xc0
2017-02-14T00:12:59.820556+03:00 storage9 [24646.506542] 
[<ffffffffb318f901>] ? __alloc_pages_nodemask+0x861/0x16b0
2017-02-14T00:12:59.820683+03:00 storage9 [24646.506669] 
[<ffffffffb345d64b>] ? __radix_tree_lookup+0x7b/0xe0
2017-02-14T00:12:59.820851+03:00 storage9 [24646.506796] 
[<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
2017-02-14T00:12:59.820942+03:00 storage9 [24646.506922] 
[<ffffffffb3188b56>] ? filemap_fault+0x396/0x540
2017-02-14T00:12:59.821072+03:00 storage9 [24646.507048] 
[<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
2017-02-14T00:12:59.821192+03:00 storage9 [24646.507176] 
[<ffffffffb32a5bdf>] ? ext4_filemap_fault+0x3f/0x60
2017-02-14T00:12:59.821318+03:00 storage9 [24646.507303] 
[<ffffffffb31b8591>] ? __do_fault+0x71/0x120
2017-02-14T00:12:59.821441+03:00 storage9 [24646.507428] 
[<ffffffffb31be36d>] ? handle_mm_fault+0xdcd/0x14b0
2017-02-14T00:12:59.821574+03:00 storage9 [24646.507555] 
[<ffffffffb31ea9a4>] ? kmem_cache_free+0x204/0x220
2017-02-14T00:12:59.821697+03:00 storage9 [24646.507682] 
[<ffffffffb32114b9>] ? __fput+0x149/0x200
2017-02-14T00:12:59.821824+03:00 storage9 [24646.507806] 
[<ffffffffb30461ff>] ? __do_page_fault+0x1cf/0x5a0
2017-02-14T00:12:59.821954+03:00 storage9 [24646.507934] 
[<ffffffffb3843362>] ? page_fault+0x22/0x30

Even kswapd gets stuck:

2017-02-14T00:13:10.306351+03:00 storage9 [24656.991375] MemAlloc: 
kswapd0(114) flags=0xa40840 switches=22109 uninterruptible
2017-02-14T00:13:10.306351+03:00 storage9 [24656.991622] kswapd0         D
2017-02-14T00:13:10.306473+03:00 storage9     0   114      2 0x00000000
2017-02-14T00:13:10.306482+03:00 storage9 [24656.991753]  0000000000000000
2017-02-14T00:13:10.306482+03:00 storage9  ffffa43830be0f00
2017-02-14T00:13:10.306488+03:00 storage9  ffffa44342ff2880
2017-02-14T00:13:10.306601+03:00 storage9  ffffa443600e0d80
2017-02-14T00:13:10.306714+03:00 storage9
2017-02-14T00:13:10.306723+03:00 storage9 [24656.992000]  ffffa4437ad14300
2017-02-14T00:13:10.306723+03:00 storage9  ffffb1ca8d2737d8
2017-02-14T00:13:10.306731+03:00 storage9  ffffffffb383e799
2017-02-14T00:13:10.306846+03:00 storage9  ffffb1ca8d273788
2017-02-14T00:13:10.306956+03:00 storage9
2017-02-14T00:13:10.306957+03:00 storage9 [24656.992245]  ffffb1ca8d273788
2017-02-14T00:13:10.306957+03:00 storage9  ffffb1ca8d273798
2017-02-14T00:13:10.307011+03:00 storage9  ffffb1ca8d273798
2017-02-14T00:13:10.307087+03:00 storage9  0000000002400001
2017-02-14T00:13:10.307200+03:00 storage9
2017-02-14T00:13:10.307321+03:00 storage9 [24656.992489] Call Trace:
2017-02-14T00:13:10.307475+03:00 storage9 [24656.992611] 
[<ffffffffb383e799>] ? __schedule+0x179/0x5c8
2017-02-14T00:13:10.307572+03:00 storage9 [24656.992736] 
[<ffffffffb383ecc2>] ? schedule+0x32/0x80
2017-02-14T00:13:10.307703+03:00 storage9 [24656.992861] 
[<ffffffffb3841002>] ? rwsem_down_read_failed+0xb2/0x100
2017-02-14T00:13:10.307831+03:00 storage9 [24656.992987] 
[<ffffffffb3466004>] ? call_rwsem_down_read_failed+0x14/0x30
2017-02-14T00:13:10.307949+03:00 storage9 [24656.993114] 
[<ffffffffb3840443>] ? down_read+0x13/0x30
2017-02-14T00:13:10.308075+03:00 storage9 [24656.993239] 
[<ffffffffb3373420>] ? xfs_map_blocks+0x90/0x2f0
2017-02-14T00:13:10.308214+03:00 storage9 [24656.993366] 
[<ffffffffb3375256>] ? xfs_do_writepage+0x2b6/0x6a0
2017-02-14T00:13:10.308337+03:00 storage9 [24656.993496] 
[<ffffffffb3245589>] ? submit_bh_wbc+0x169/0x200
2017-02-14T00:13:10.308518+03:00 storage9 [24656.993627] 
[<ffffffffb337566c>] ? xfs_vm_writepage+0x2c/0x50
2017-02-14T00:13:10.308603+03:00 storage9 [24656.993760] 
[<ffffffffb319bb0b>] ? pageout.isra.60+0xeb/0x2e0
2017-02-14T00:13:10.308728+03:00 storage9 [24656.993891] 
[<ffffffffb319df06>] ? shrink_page_list+0x736/0xa50
2017-02-14T00:13:10.308935+03:00 storage9 [24656.994022] 
[<ffffffffb319eb52>] ? shrink_inactive_list+0x202/0x4b0
2017-02-14T00:13:10.308993+03:00 storage9 [24656.994152] 
[<ffffffffb319f4d1>] ? shrink_node_memcg+0x2e1/0x790
2017-02-14T00:13:10.309148+03:00 storage9 [24656.994282] 
[<ffffffffb319fa49>] ? shrink_node+0xc9/0x2d0
2017-02-14T00:13:10.309258+03:00 storage9 [24656.994416] 
[<ffffffffb31a0944>] ? kswapd+0x2e4/0x690
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994546] 
[<ffffffffb31a0660>] ? mem_cgroup_shrink_node+0x1a0/0x1a0
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994678] 
[<ffffffffb31a0660>] ? mem_cgroup_shrink_node+0x1a0/0x1a0
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994808] 
[<ffffffffb30b4fc2>] ? kthread+0xc2/0xe0
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994937] 
[<ffffffffb30b4f00>] ? __kthread_init_worker+0xb0/0xb0
2017-02-14T00:13:10.309910+03:00 storage9 [24656.995069] 
[<ffffffffb38425e2>] ? ret_from_fork+0x22/0x30

This finally leads to "Kernel panic - not syncing: Out of memory and no 
killable processes..." as no process is able to make progress.

I quickly hacked this:

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 9ef152b..8adfb0a 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1254,7 +1254,7 @@ struct xfs_inode *
         xfs_reclaim_work_queue(mp);
         xfs_ail_push_all(mp->m_ail);
 
-       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
+       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
 }

We ran two of our machines with this patch overnight; no more 
lockups/stalls were detected.

xfsaild does its work asynchronously, so xfs_inode objects don't grow out 
of control, as confirmed by slabtop.

I put netconsole logs here: http://aplkv.beget.tech/lkml/xfs/ for anyone 
interested.

Comments

Brian Foster Feb. 15, 2017, 4:05 p.m. UTC | #1
On Wed, Feb 15, 2017 at 03:56:56PM +0300, Alexander Polakov wrote:
> On 01/24/2017 12:51 AM, Andrew Morton wrote:
> > 
> > 
> > (switched to email.  Please respond via emailed reply-to-all, not via the
> > bugzilla web interface).
> > 
> > A 2100 second page allocation stall!
> > 
> 
> I think we finally figured out the problem using Tetsuo Handa's mallocwd
> patch. It seems like it is in XFS direct reclaim path.
> 
> Here's how it goes:
> 
> memory is low, rsync goes into direct reclaim, locking xfs mutex in
> xfs_reclaim_inodes_nr():
> 
> 2017-02-14T00:12:59.811447+03:00 storage9 [24646.497290] MemAlloc:
> rsync(19706) flags=0x404840 switches=8692 seq=340
> gfp=0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK) order=0
> delay=6795
> 2017-02-14T00:12:59.811447+03:00 storage9 [24646.497550] rsync           R
> 2017-02-14T00:12:59.811579+03:00 storage9     0 19706   5000 0x00000000
> 2017-02-14T00:12:59.811579+03:00 storage9 [24646.497690]  ffffa4361dc36c00
> 2017-02-14T00:12:59.811591+03:00 storage9  0000000c1dc36c00
> 2017-02-14T00:12:59.811591+03:00 storage9  ffffa4361dc36c00
> 2017-02-14T00:12:59.811724+03:00 storage9  ffffa44347792580
> 2017-02-14T00:12:59.811841+03:00 storage9
> 2017-02-14T00:12:59.811841+03:00 storage9 [24646.497951]  0000000000000000
> 2017-02-14T00:12:59.811846+03:00 storage9  ffffb1cab6343458
> 2017-02-14T00:12:59.811885+03:00 storage9  ffffffffb383e799
> 2017-02-14T00:12:59.811987+03:00 storage9  0000000000000000
> 2017-02-14T00:12:59.812103+03:00 storage9
> 2017-02-14T00:12:59.812103+03:00 storage9 [24646.498208]  ffffa443ffff7a00
> 2017-02-14T00:12:59.812103+03:00 storage9  0000000000000001
> 2017-02-14T00:12:59.812104+03:00 storage9  ffffb1cab6343448
> 2017-02-14T00:12:59.812233+03:00 storage9  0000000000000002
> 2017-02-14T00:12:59.812350+03:00 storage9
> 2017-02-14T00:12:59.812475+03:00 storage9 [24646.498462] Call Trace:
> 2017-02-14T00:12:59.812610+03:00 storage9 [24646.498587]
> [<ffffffffb383e799>] ? __schedule+0x179/0x5c8
> 2017-02-14T00:12:59.812733+03:00 storage9 [24646.498718]
> [<ffffffffb383ecc2>] ? schedule+0x32/0x80
> 2017-02-14T00:12:59.812869+03:00 storage9 [24646.498846]
> [<ffffffffb3841229>] ? schedule_timeout+0x159/0x2a0
> 2017-02-14T00:12:59.812997+03:00 storage9 [24646.498977]
> [<ffffffffb30f1450>] ? add_timer_on+0x130/0x130
> 2017-02-14T00:12:59.813130+03:00 storage9 [24646.499108]
> [<ffffffffb318ff13>] ? __alloc_pages_nodemask+0xe73/0x16b0
> 2017-02-14T00:12:59.813263+03:00 storage9 [24646.499240]
> [<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
> 2017-02-14T00:12:59.813388+03:00 storage9 [24646.499371]
> [<ffffffffb33d1a51>] ? xfs_buf_allocate_memory+0x171/0x2c0
> 2017-02-14T00:12:59.813564+03:00 storage9 [24646.499503]
> [<ffffffffb337c48b>] ? xfs_buf_get_map+0x18b/0x1d0
> 2017-02-14T00:12:59.813654+03:00 storage9 [24646.499634]
> [<ffffffffb337cdcb>] ? xfs_buf_read_map+0x3b/0x160
> 2017-02-14T00:12:59.813783+03:00 storage9 [24646.499765]
> [<ffffffffb33b71b0>] ? xfs_trans_read_buf_map+0x1f0/0x490
> 2017-02-14T00:12:59.813931+03:00 storage9 [24646.499897]
> [<ffffffffb3362859>] ? xfs_imap_to_bp+0x79/0x120
> 2017-02-14T00:12:59.814056+03:00 storage9 [24646.500029]
> [<ffffffffb3394a38>] ? xfs_iflush+0x118/0x380
> 2017-02-14T00:12:59.814196+03:00 storage9 [24646.500158]
> [<ffffffffb30d6130>] ? wake_atomic_t_function+0x40/0x40
> 2017-02-14T00:12:59.814314+03:00 storage9 [24646.500289]
> [<ffffffffb3385af4>] ? xfs_reclaim_inode+0x274/0x3f0
> 2017-02-14T00:12:59.814444+03:00 storage9 [24646.500421]
> [<ffffffffb3385e29>] ? xfs_reclaim_inodes_ag+0x1b9/0x2c0
> 2017-02-14T00:12:59.814573+03:00 storage9 [24646.500553]
> [<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
> 2017-02-14T00:12:59.814701+03:00 storage9 [24646.500685]
> [<ffffffffb30f0e31>] ? lock_timer_base+0x51/0x70
> 2017-02-14T00:12:59.814846+03:00 storage9 [24646.500814]
> [<ffffffffb345ca4e>] ? radix_tree_gang_lookup_tag+0xae/0x180
> 2017-02-14T00:12:59.814966+03:00 storage9 [24646.500946]
> [<ffffffffb336f588>] ? xfs_perag_get_tag+0x48/0x100
> 2017-02-14T00:12:59.815104+03:00 storage9 [24646.501079]
> [<ffffffffb31b63c1>] ? __list_lru_walk_one.isra.7+0x31/0x120
> 2017-02-14T00:12:59.815231+03:00 storage9 [24646.501212]
> [<ffffffffb322aed0>] ? iget5_locked+0x240/0x240
> 2017-02-14T00:12:59.815360+03:00 storage9 [24646.501342]
> [<ffffffffb33873c1>] ? xfs_reclaim_inodes_nr+0x31/0x40
> 2017-02-14T00:12:59.815489+03:00 storage9 [24646.501472]
> [<ffffffffb3213160>] ? super_cache_scan+0x1a0/0x1b0
> 2017-02-14T00:12:59.815629+03:00 storage9 [24646.501603]
> [<ffffffffb319bf62>] ? shrink_slab+0x262/0x440
> 2017-02-14T00:12:59.815760+03:00 storage9 [24646.501734]
> [<ffffffffb319cc8b>] ? drop_slab_node+0x2b/0x60
> 2017-02-14T00:12:59.815891+03:00 storage9 [24646.501864]
> [<ffffffffb319cd02>] ? drop_slab+0x42/0x70
> 2017-02-14T00:12:59.816020+03:00 storage9 [24646.501994]
> [<ffffffffb318abc0>] ? out_of_memory+0x220/0x560
> 2017-02-14T00:12:59.816160+03:00 storage9 [24646.502122]
> [<ffffffffb31903b2>] ? __alloc_pages_nodemask+0x1312/0x16b0
> 2017-02-14T00:12:59.816285+03:00 storage9 [24646.502255]
> [<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
> 2017-02-14T00:12:59.816407+03:00 storage9 [24646.502386]
> [<ffffffffb304b803>] ? pte_alloc_one+0x13/0x40
> 2017-02-14T00:12:59.816536+03:00 storage9 [24646.502517]
> [<ffffffffb31be21f>] ? handle_mm_fault+0xc7f/0x14b0
> 2017-02-14T00:12:59.816659+03:00 storage9 [24646.502648]
> [<ffffffffb30461ff>] ? __do_page_fault+0x1cf/0x5a0
> 2017-02-14T00:12:59.816851+03:00 storage9 [24646.502777]
> [<ffffffffb3843362>] ? page_fault+0x22/0x30
> 

You're in inode reclaim, blocked on a memory allocation for an inode
buffer required to flush a dirty inode. I suppose this means that the
backing buffer for the inode has already been reclaimed and must be
re-read, which ideally wouldn't have occurred before the inode is
flushed.

> But it cannot get memory, because it's low (?). So it stays blocked.
> 
> Other processes do the same but they can't get past the mutex in
> xfs_reclaim_inodes_nr():
> 
...
> Which finally leads to "Kernel panic - not syncing: Out of memory and no
> killable processes..." as no process is able to proceed.
> 
> I quickly hacked this:
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 9ef152b..8adfb0a 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1254,7 +1254,7 @@ struct xfs_inode *
>         xfs_reclaim_work_queue(mp);
>         xfs_ail_push_all(mp->m_ail);
> 
> -       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> &nr_to_scan);
> +       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> &nr_to_scan);
>  }
> 

So you've disabled inode reclaim completely...

The bz shows you have non-default vm settings such as
'vm.vfs_cache_pressure = 200.' My understanding is that this prefers
aggressive inode reclaim, yet the code workaround here is to bypass XFS
inode reclaim. Out of curiosity, have you reproduced this problem using
the default vfs_cache_pressure value (or, if so, possibly moving it in
the other direction)?

Brian

> 
> We ran 2 of our machines with this patch for a night, no more lockups/stalls
> were detected.
> 
> xfsaild does its work asynchronously, so xfs_inodes don't run wild as
> confirmed by slabtop.
> 
> I put netconsole logs here: http://aplkv.beget.tech/lkml/xfs/ for anyone
> interested.
> 
> -- 
> Alexander Polakov | system software engineer | https://beget.com
Alexander Polakov Feb. 15, 2017, 4:52 p.m. UTC | #2
On 02/15/2017 07:05 PM, Brian Foster wrote:
> You're in inode reclaim, blocked on a memory allocation for an inode
> buffer required to flush a dirty inode. I suppose this means that the
> backing buffer for the inode has already been reclaimed and must be
> re-read, which ideally wouldn't have occurred before the inode is
> flushed.
>
>> But it cannot get memory, because it's low (?). So it stays blocked.
>>
>> Other processes do the same but they can't get past the mutex in
>> xfs_reclaim_inodes_nr():
>>
> ...
>> Which finally leads to "Kernel panic - not syncing: Out of memory and no
>> killable processes..." as no process is able to proceed.
>>
>> I quickly hacked this:
>>
>> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
>> index 9ef152b..8adfb0a 100644
>> --- a/fs/xfs/xfs_icache.c
>> +++ b/fs/xfs/xfs_icache.c
>> @@ -1254,7 +1254,7 @@ struct xfs_inode *
>>         xfs_reclaim_work_queue(mp);
>>         xfs_ail_push_all(mp->m_ail);
>>
>> -       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
>> &nr_to_scan);
>> +       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
>> &nr_to_scan);
>>  }
>>
>
> So you've disabled inode reclaim completely...

I don't think this is correct. I disabled direct / kswapd reclaim.
XFS uses a background worker for async reclaim:

http://lxr.free-electrons.com/source/fs/xfs/xfs_icache.c#L178
http://lxr.free-electrons.com/source/fs/xfs/xfs_super.c#L1534
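
For reference, the background worker in the linked xfs_icache.c looks
roughly like this (paraphrased from the links above, so it may not match
our exact kernel version): it does a non-blocking SYNC_TRYLOCK pass over
the reclaimable inodes and then requeues itself:

static void
xfs_reclaim_worker(
	struct work_struct	*work)
{
	struct xfs_mount *mp = container_of(to_delayed_work(work),
					struct xfs_mount, m_reclaim_work);

	/* non-blocking pass over the reclaimable inodes... */
	xfs_reclaim_inodes(mp, SYNC_TRYLOCK);
	/* ...then reschedule ourselves to run again later */
	xfs_reclaim_work_queue(mp);
}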

Confirmed by running trace-cmd on a patched kernel:

# trace-cmd record -p function -l xfs_reclaim_inodes -l xfs_reclaim_worker
# trace-cmd report
CPU 0 is empty
CPU 2 is empty
CPU 3 is empty
CPU 5 is empty
CPU 8 is empty
CPU 10 is empty
CPU 11 is empty
cpus=16
     kworker/12:2-31208 [012] 106450.590216: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106450.590226: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106450.756879: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106450.756882: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106450.920212: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106450.920215: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.083549: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.083552: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.246882: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.246885: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.413546: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.413548: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.580215: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.580217: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.743549: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.743550: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.906882: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.906885: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106452.070216: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106452.070219: function: xfs_reclaim_inodes
      kworker/7:0-14419 [007] 106454.730218: function: xfs_reclaim_worker
      kworker/7:0-14419 [007] 106454.730227: function: xfs_reclaim_inodes
      kworker/1:0-14025 [001] 106455.340221: function: xfs_reclaim_worker
      kworker/1:0-14025 [001] 106455.340225: function: xfs_reclaim_inodes

> The bz shows you have non-default vm settings such as
> 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
> aggressive inode reclaim, yet the code workaround here is to bypass XFS
> inode reclaim. Out of curiousity, have you reproduced this problem using
> the default vfs_cache_pressure value (or if so, possibly moving it in
> the other direction)?

Yes, we've tried that; it had essentially zero influence.
Brian Foster Feb. 15, 2017, 6:09 p.m. UTC | #3
On Wed, Feb 15, 2017 at 07:52:13PM +0300, Alexander Polakov wrote:
> On 02/15/2017 07:05 PM, Brian Foster wrote:
> > You're in inode reclaim, blocked on a memory allocation for an inode
> > buffer required to flush a dirty inode. I suppose this means that the
> > backing buffer for the inode has already been reclaimed and must be
> > re-read, which ideally wouldn't have occurred before the inode is
> > flushed.
> > 
> > > But it cannot get memory, because it's low (?). So it stays blocked.
> > > 
> > > Other processes do the same but they can't get past the mutex in
> > > xfs_reclaim_inodes_nr():
> > > 
> > ...
> > > Which finally leads to "Kernel panic - not syncing: Out of memory and no
> > > killable processes..." as no process is able to proceed.
> > > 
> > > I quickly hacked this:
> > > 
> > > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > > index 9ef152b..8adfb0a 100644
> > > --- a/fs/xfs/xfs_icache.c
> > > +++ b/fs/xfs/xfs_icache.c
> > > @@ -1254,7 +1254,7 @@ struct xfs_inode *
> > >         xfs_reclaim_work_queue(mp);
> > >         xfs_ail_push_all(mp->m_ail);
> > > 
> > > -       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> > > &nr_to_scan);
> > > +       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> > > &nr_to_scan);
> > >  }
> > > 
> > 
> > So you've disabled inode reclaim completely...
> 
> I don't think this is correct. I disabled direct / kswapd reclaim.
> XFS uses background worker for async reclaim:
> 
> http://lxr.free-electrons.com/source/fs/xfs/xfs_icache.c#L178
> http://lxr.free-electrons.com/source/fs/xfs/xfs_super.c#L1534
> 

Ah, Ok. It sounds like this allows the reclaim thread to carry on into
other shrinkers and free up memory that way, perhaps. This sounds kind
of similar to the issue brought up previously here[1], but not quite the
same in that instead of backing off of locking to allow other shrinkers
to progress, we back off of memory allocations required to free up
inodes (memory).

In theory, I think something analogous to a trylock for inode to buffer
mappings that are no longer cached (or more specifically, cannot
currently be allocated) may work around this, but it's not immediately
clear to me whether that's a proper fix (it's also probably not a
trivial change either). I'm still kind of curious why we end up with
dirty inodes with reclaimed buffers. If this problem repeats, is it
always with a similar stack (i.e., reclaim -> xfs_iflush() ->
xfs_imap_to_bp())?
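
To sketch what I mean (purely illustrative - the example_* helpers are
made up, this is not a real patch): reclaim would only flush an inode
whose cluster buffer is still cached, and otherwise back off and leave
the flush to xfsaild instead of allocating memory from within reclaim:

/* Hypothetical sketch only; none of the example_* names exist. */
static int
example_iflush_nowait(struct example_inode *ip)
{
	struct example_buf	*bp;

	/* look up the cluster buffer without allocating memory or doing I/O */
	bp = example_buf_find_cached(ip->mount, ip->imap_blkno);
	if (!bp)
		return -EAGAIN;	/* not cached: skip, let xfsaild flush it later */

	example_iflush_int(ip, bp);	/* flush the inode into the buffer */
	example_buf_release(bp);
	return 0;
}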

How many independent filesystems are you running this workload against?
Can you describe the workload in more detail?

...
> > The bz shows you have non-default vm settings such as
> > 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
> > aggressive inode reclaim, yet the code workaround here is to bypass XFS
> > inode reclaim. Out of curiousity, have you reproduced this problem using
> > the default vfs_cache_pressure value (or if so, possibly moving it in
> > the other direction)?
> 
> Yes, we've tried that, it had about 0 influence.
> 

Which.. with what values? And by zero influence, do you simply mean the
stall still occurred or you have some other measurement of slab sizes or
some such that are unaffected?

Brian

> -- 
> Alexander Polakov | system software engineer | https://beget.com
Alexander Polakov Feb. 16, 2017, 10:56 a.m. UTC | #4
On 02/15/2017 09:09 PM, Brian Foster wrote:
> Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> other shrinkers and free up memory that way, perhaps. This sounds kind
> of similar to the issue brought up previously here[1], but not quite the
> same in that instead of backing off of locking to allow other shrinkers
> to progress, we back off of memory allocations required to free up
> inodes (memory).
>
> In theory, I think something analogous to a trylock for inode to buffer
> mappings that are no longer cached (or more specifically, cannot
> currently be allocated) may work around this, but it's not immediately
> clear to me whether that's a proper fix (it's also probably not a
> trivial change either). I'm still kind of curious why we end up with
> dirty inodes with reclaimed buffers. If this problem repeats, is it
> always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> xfs_imap_to_bp())?

Looks like it is.

> How many independent filesystems are you running this workload against?

storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
15
storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
44

> Can you describe the workload in more detail?

This is a backup server running rsync. At night, our production servers 
rsync their files to this server (a lot of small files).

> ...
>>> The bz shows you have non-default vm settings such as
>>> 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
>>> aggressive inode reclaim, yet the code workaround here is to bypass XFS
>>> inode reclaim. Out of curiousity, have you reproduced this problem using
>>> the default vfs_cache_pressure value (or if so, possibly moving it in
>>> the other direction)?
>>
>> Yes, we've tried that, it had about 0 influence.
>>
>
> Which.. with what values? And by zero influence, do you simply mean the
> stall still occurred or you have some other measurement of slab sizes or
> some such that are unaffected?

Unfortunately, I don't have slab statistics at hand. Stalls and the 
subsequent OOM situation still occurred with this setting at 100.
Brian Foster Feb. 16, 2017, 5:20 p.m. UTC | #5
On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> On 02/15/2017 09:09 PM, Brian Foster wrote:
> > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > other shrinkers and free up memory that way, perhaps. This sounds kind
> > of similar to the issue brought up previously here[1], but not quite the
> > same in that instead of backing off of locking to allow other shrinkers
> > to progress, we back off of memory allocations required to free up
> > inodes (memory).
> > 
> > In theory, I think something analogous to a trylock for inode to buffer
> > mappings that are no longer cached (or more specifically, cannot
> > currently be allocated) may work around this, but it's not immediately
> > clear to me whether that's a proper fix (it's also probably not a
> > trivial change either). I'm still kind of curious why we end up with
> > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > xfs_imap_to_bp())?
> 
> Looks like it is.
> 
> > How many independent filesystems are you running this workload against?
> 
> storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> 15
> storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> 44
> 

So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
similar size/geometry? If so, can you send representative xfs_info
output for the fs'?

I'm reading back through that reclaim thread[1] and it appears this
indeed is not a straightforward issue. It sounds like the summary is
Chris hit the same general behavior you have and is helped by bypassing
the synchronous nature of the shrinker. This allows other shrinkers to
proceed, but this is not a general solution because other workloads
depend on the synchronous shrinker behavior to throttle direct reclaim.
I can't say I understand all of the details and architecture of how/why
that is the case.

FWIW, it sounds like the first order problem is that we generally don't
want to find/flush dirty inodes from reclaim. A couple of things that
might help avoid this situation are more aggressive
/proc/sys/fs/xfs/xfssyncd_centisecs tuning, or perhaps a smaller log
size, which would cause more tail-pushing pressure on the AIL instead of
pressure originating from memory reclaim. The latter might not be so
convenient if this is an already populated backup server, though.

Beyond that, there's Chris' patch, another patch that Dave proposed[2],
and obviously your hack here to defer inode reclaim entirely to the
workqueue (I've CC'd Dave since it sounds like he might have been
working on this further..). Unfortunately, it sounds like [1] or your
hack aren't things we can pull into mainline for the time being until
the broader reclaim queueing/throttling mechanism is in place. I suppose
we might be able to revisit [2] if it actually does enough to prevent
this problem...

[1] http://www.spinics.net/lists/linux-xfs/msg01541.html
[2] http://www.spinics.net/lists/linux-xfs/msg02212.html

Brian

> > Can you describe the workload in more detail?
> 
> This is a backup server, we're running rsync. At night our production
> servers rsync their files to this server (a lot of small files).
> 
> > ...
> > > > The bz shows you have non-default vm settings such as
> > > > 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
> > > > aggressive inode reclaim, yet the code workaround here is to bypass XFS
> > > > inode reclaim. Out of curiousity, have you reproduced this problem using
> > > > the default vfs_cache_pressure value (or if so, possibly moving it in
> > > > the other direction)?
> > > 
> > > Yes, we've tried that, it had about 0 influence.
> > > 
> > 
> > Which.. with what values? And by zero influence, do you simply mean the
> > stall still occurred or you have some other measurement of slab sizes or
> > some such that are unaffected?
> 
> Unfortunately I don't have slab statistics at hand. Stalls and following OOM
> situation still occured with this setting at 100.
> 
> -- 
> Alexander Polakov | system software engineer | https://beget.com
Dave Chinner Feb. 16, 2017, 10:21 p.m. UTC | #6
On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> > On 02/15/2017 09:09 PM, Brian Foster wrote:
> > > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > > other shrinkers and free up memory that way, perhaps. This sounds kind
> > > of similar to the issue brought up previously here[1], but not quite the
> > > same in that instead of backing off of locking to allow other shrinkers
> > > to progress, we back off of memory allocations required to free up
> > > inodes (memory).
> > > 
> > > In theory, I think something analogous to a trylock for inode to buffer
> > > mappings that are no longer cached (or more specifically, cannot
> > > currently be allocated) may work around this, but it's not immediately
> > > clear to me whether that's a proper fix (it's also probably not a
> > > trivial change either). I'm still kind of curious why we end up with
> > > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > > xfs_imap_to_bp())?
> > 
> > Looks like it is.
> > 
> > > How many independent filesystems are you running this workload against?
> > 
> > storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> > 15
> > storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> > 44
> > 
> 
> So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
> similar size/geometry? If so, can you send representative xfs_info
> output for the fs'?
> 
> I'm reading back through that reclaim thread[1] and it appears this
> indeed is not a straightforward issue. It sounds like the summary is
> Chris hit the same general behavior you have and is helped by bypassing
> the synchronous nature of the shrinker. This allows other shrinkers to
> proceed, but this is not a general solution because other workloads
> depend on the synchronous shrinker behavior to throttle direct reclaim.
> I can't say I understand all of the details and architecture of how/why
> that is the case.

It's complicated, made worse by the state of flux of the mm reclaim
subsystem and the frequent regressions in behaviour that come and
go. This makes testing modifications to the shrinker behaviour
extremely challenging - trying to separate shrinker artifacts from
"something else has changed in memory reclaim" takes a lot of
time....

> FWIW, it sounds like the first order problem is that we generally don't
> want to find/flush dirty inodes from reclaim.

Right. because that forces out-of-order inode writeback and it
degenerates into blocking small random writes.

> A couple things that might
> help avoid this situation are more aggressive
> /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> smaller log size would cause more tail pushing pressure on the AIL
> instead of pressure originating from memory reclaim. The latter might
> not be so convenient if this is an already populated backup server,
> though.
> 
> Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> and obviously your hack here to defer inode reclaim entirely to the
> workqueue (I've CC'd Dave since it sounds like he might have been
> working on this further..).

I was working on a more solid set of changes, but every time I
updated the kernel tree I used as my base for development, the
baseline kernel reclaim behaviour would change. I'd isolate the
behavioural change, upgrade to the kernel that contained the fix,
and then trip over some new whacky behaviour that made no sense. I
spent more time in this loop than actually trying to fix the XFS
problem - chasing a moving target makes finding the root cause of
the reclaim stalls just about impossible. 

Brian, I can send you what I have but it's really just a bag of
bolts at this point because I was never able to validate that any of
the patches made a measurable improvement to reclaim behaviour under
any workload I ran.....

FWIW, the major problem with removing the blocking in inode reclaim
is the ease with which you can then trigger the OOM killer from
userspace.  The high level memory reclaim algorithms break down when
there are hundreds of direct reclaim processes hammering on reclaim
and reclaim stops making progress because it's skipping dirty
objects.  Direct reclaim ends up insufficiently throttled, so rather
than blocking it winds up reclaim priority and then declares OOM
because reclaim runs out of retries before sufficient memory has
been freed.

That, right now, looks to be an unsolvable problem without a major
rework of direct reclaim.  I've pretty much given up on ever getting
the unbound direct reclaim concurrency problem that is causing us
these problems fixed, so we are left to handle it in the subsystem
shrinkers as best we can. That leaves us with an unfortunate choice: 

	a) throttle excessive concurrency in the shrinker to prevent
	   IO breakdown, thereby causing reclaim latency bubbles
	   under load but having a stable, reliable system; or
	b) optimise for minimal reclaim latency and risk userspace
	   memory demand triggering the OOM killer whenever there
	   are lots of dirty inodes in the system.

Quite frankly, there's only one choice we can make in this
situation: reliability is always more important than performance.

Cheers,

Dave.
Tetsuo Handa Feb. 17, 2017, 11:11 a.m. UTC | #7
On 2017/02/17 7:21, Dave Chinner wrote:
> FWIW, the major problem with removing the blocking in inode reclaim
> is the ease with which you can then trigger the OOM killer from
> userspace.  The high level memory reclaim algorithms break down when
> there are hundreds of direct reclaim processes hammering on reclaim
> and reclaim stops making progress because it's skipping dirty
> objects.  Direct reclaim ends up insufficiently throttled, so rather
> than blocking it winds up reclaim priority and then declares OOM
> because reclaim runs out of retries before sufficient memory has
> been freed.
> 
> That, right now, looks to be an unsolvable problem without a major
> rework of direct reclaim.  I've pretty much given up on ever getting
> the unbound direct reclaim concurrency problem that is causing us
> these problems fixed, so we are left to handle it in the subsystem
> shrinkers as best we can. That leaves us with an unfortunate choice: 
> 
> 	a) throttle excessive concurrency in the shrinker to prevent
> 	   IO breakdown, thereby causing reclaim latency bubbles
> 	   under load but having a stable, reliable system; or
> 	b) optimise for minimal reclaim latency and risk userspace
> 	   memory demand triggering the OOM killer whenever there
> 	   are lots of dirty inodes in the system.
> 
> Quite frankly, there's only one choice we can make in this
> situation: reliability is always more important than performance.

Is it possible to get rid of direct reclaim and let the allocating thread
wait on a queue? I wished for such a change in the context of __GFP_KILLABLE at
http://lkml.kernel.org/r/201702012049.BAG95379.VJFFOHMStLQFOO@I-love.SAKURA.ne.jp .


Brian Foster Feb. 17, 2017, 7:05 p.m. UTC | #8
On Fri, Feb 17, 2017 at 09:21:29AM +1100, Dave Chinner wrote:
> On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> > On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> > > On 02/15/2017 09:09 PM, Brian Foster wrote:
> > > > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > > > other shrinkers and free up memory that way, perhaps. This sounds kind
> > > > of similar to the issue brought up previously here[1], but not quite the
> > > > same in that instead of backing off of locking to allow other shrinkers
> > > > to progress, we back off of memory allocations required to free up
> > > > inodes (memory).
> > > > 
> > > > In theory, I think something analogous to a trylock for inode to buffer
> > > > mappings that are no longer cached (or more specifically, cannot
> > > > currently be allocated) may work around this, but it's not immediately
> > > > clear to me whether that's a proper fix (it's also probably not a
> > > > trivial change either). I'm still kind of curious why we end up with
> > > > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > > > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > > > xfs_imap_to_bp())?
> > > 
> > > Looks like it is.
> > > 
> > > > How many independent filesystems are you running this workload against?
> > > 
> > > storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> > > 15
> > > storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> > > 44
> > > 
> > 
> > So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
> > similar size/geometry? If so, can you send representative xfs_info
> > output for the fs'?
> > 
> > I'm reading back through that reclaim thread[1] and it appears this
> > indeed is not a straightforward issue. It sounds like the summary is
> > Chris hit the same general behavior you have and is helped by bypassing
> > the synchronous nature of the shrinker. This allows other shrinkers to
> > proceed, but this is not a general solution because other workloads
> > depend on the synchronous shrinker behavior to throttle direct reclaim.
> > I can't say I understand all of the details and architecture of how/why
> > that is the case.
> 
> It's complicated, made worse by the state of flux of the mm reclaim
> subsystem and the frequent regressions in behaviour that come and
> go. This makes testing modifications to the shrinker behaviour
> extremely challenging - trying to separate shirnker artifacts from
> "something else has changed in memory reclaim" takes a lot of
> time....
> 
> > FWIW, it sounds like the first order problem is that we generally don't
> > want to find/flush dirty inodes from reclaim.
> 
> Right. because that forces out-of-order inode writeback and it
> degenerates into blocking small random writes.
> 
> > A couple things that might
> > help avoid this situation are more aggressive
> > /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> > smaller log size would cause more tail pushing pressure on the AIL
> > instead of pressure originating from memory reclaim. The latter might
> > not be so convenient if this is an already populated backup server,
> > though.
> > 
> > Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> > and obviously your hack here to defer inode reclaim entirely to the
> > workqueue (I've CC'd Dave since it sounds like he might have been
> > working on this further..).
> 
> I was working on a more solid set of changes, but every time I
> updated the kernel tree I used as my base for development, the
> baseline kernel reclaim behaviour would change. I'd isolate the
> behavioural change, upgrade to the kernel that contained the fix,
> and then trip over some new whacky behaviour that made no sense. I
> spent more time in this loop than actually trying to fix the XFS
> problem - chasing a moving target makes finding the root cause of
> the reclaim stalls just about impossible. 
> 
> Brian, I can send you what I have but it's really just a bag of
> bolts at this point because I was never able to validate that any of
> the patches made a measurable improvement to reclaim behaviour under
> any workload I ran.....
> 

Sure, I'm curious to see what direction this goes in. I would think
anything that provides a backoff to other shrinkers would help this
particular workload where many different filesystems are active. FWIW,
I'd probably also need more details about what workloads you're testing
and how you're measuring improvements and whatnot to try and take any of
that stuff any farther (particularly how you verify the problems with
dropping blocking behavior entirely), though..

> FWIW, the major problem with removing the blocking in inode reclaim
> is the ease with which you can then trigger the OOM killer from
> userspace.  The high level memory reclaim algorithms break down when
> there are hundreds of direct reclaim processes hammering on reclaim
> and reclaim stops making progress because it's skipping dirty
> objects.  Direct reclaim ends up insufficiently throttled, so rather
> than blocking it winds up reclaim priority and then declares OOM
> because reclaim runs out of retries before sufficient memory has
> been freed.
> 

I'd need to spend some time in the shrinker code to grok this, but if
there's such a priority, would switching blocking behavior based on
priority provide a way to mitigate this problem from within the
shrinker? For example, provide non-blocking behavior on the lowest
priority to kick off flushing and allow progress into other shrinkers,
otherwise we flush and wait if the priority is elevated..?

IOW, it sounds like the problem in this case is that we subject the rest
of the allocation infrastructure to delays in configurations where we
are one of N potential shrinkers with reclaimable objects, because we
have to deal with this situation where our one shrinker actually is the
main/primary choke point for multiple allocator -> direct reclaimers.
I'm wondering if some kind of severity parameter managed by the shrinker
infra would help us distinguish between those scenarios (even if it were
a dumb LOW/HIGH priority param, where LOW allows for one pass through
all of the shrinkers to kick off I/O and whatnot before any one of them
should actually block on locks or I/O). Then again, I'm just handwaving
as I'm only just familiarizing with the context and problem.
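
Roughly what I'm handwaving at, as a sketch only (the sc->priority field
and the example_* names are hypothetical, not existing code): stay
non-blocking on the first low-priority pass so other shrinkers get a
chance to run, and only add SYNC_WAIT once reclaim pressure has
escalated:

/* Hypothetical sketch: sc->priority and the example_* names are made up. */
static long
example_free_cached_inodes(struct super_block *sb, struct shrink_control *sc)
{
	int	flags = SYNC_TRYLOCK;
	long	nr_to_scan = sc->nr_to_scan;

	/*
	 * A lower priority number means more desperate reclaim. Only block
	 * (throttle direct reclaim) when pressure is elevated; otherwise do
	 * a non-blocking pass and let the other shrinkers make progress.
	 */
	if (sc->priority < EXAMPLE_BLOCKING_PRIORITY)
		flags |= SYNC_WAIT;

	return example_reclaim_inodes_ag(example_mount(sb), flags, &nr_to_scan);
}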

(I also see no priority in struct shrink_control, so I guess that's an
internal reclaim thing as it is.)

> That, right now, looks to be an unsolvable problem without a major
> rework of direct reclaim.  I've pretty much given up on ever getting
> the unbound direct reclaim concurrency problem that is causing us
> these problems fixed, so we are left to handle it in the subsystem
> shrinkers as best we can. That leaves us with an unfortunate choice: 
> 
> 	a) throttle excessive concurrency in the shrinker to prevent
> 	   IO breakdown, thereby causing reclaim latency bubbles
> 	   under load but having a stable, reliable system; or
> 	b) optimise for minimal reclaim latency and risk userspace
> 	   memory demand triggering the OOM killer whenever there
> 	   are lots of dirty inodes in the system.
> 
> Quite frankly, there's only one choice we can make in this
> situation: reliability is always more important than performance.
> 

Indeed, that certainly makes sense. Thanks.

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Dave Chinner Feb. 17, 2017, 11:52 p.m. UTC | #9
On Fri, Feb 17, 2017 at 02:05:00PM -0500, Brian Foster wrote:
> On Fri, Feb 17, 2017 at 09:21:29AM +1100, Dave Chinner wrote:
> > On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> > > A couple things that might
> > > help avoid this situation are more aggressive
> > > /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> > > smaller log size would cause more tail pushing pressure on the AIL
> > > instead of pressure originating from memory reclaim. The latter might
> > > not be so convenient if this is an already populated backup server,
> > > though.
> > > 
> > > Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> > > and obviously your hack here to defer inode reclaim entirely to the
> > > workqueue (I've CC'd Dave since it sounds like he might have been
> > > working on this further..).
> > 
> > I was working on a more solid set of changes, but every time I
> > updated the kernel tree I used as my base for development, the
> > baseline kernel reclaim behaviour would change. I'd isolate the
> > behavioural change, upgrade to the kernel that contained the fix,
> > and then trip over some new whacky behaviour that made no sense. I
> > spent more time in this loop than actually trying to fix the XFS
> > problem - chasing a moving target makes finding the root cause of
> > the reclaim stalls just about impossible. 
> > 
> > Brian, I can send you what I have but it's really just a bag of
> > bolts at this point because I was never able to validate that any of
> > the patches made a measurable improvement to reclaim behaviour under
> > any workload I ran.....
> > 
> 
> Sure, I'm curious to see what direction this goes in. I would think
> anything that provides a backoff to other shrinkers would help this
> particular workload where many different filesystems are active. FWIW,
> I'd probably also need more details about what workloads you're testing
> and how you're measuring improvements and whatnot to try and take any of
> that stuff any farther (particularly how you verify the problems with
> dropping blocking behavior entirely), though..

Ok, I'll send you a copy...

> > FWIW, the major problem with removing the blocking in inode reclaim
> > is the ease with which you can then trigger the OOM killer from
> > userspace.  The high level memory reclaim algorithms break down when
> > there are hundreds of direct reclaim processes hammering on reclaim
> > and reclaim stops making progress because it's skipping dirty
> > objects.  Direct reclaim ends up insufficiently throttled, so rather
> > than blocking it winds up reclaim priority and then declares OOM
> > because reclaim runs out of retries before sufficient memory has
> > been freed.
> > 
> 
> I'd need to spend some time in the shrinker code to grok this, but if
> there's such a priority, would switching blocking behavior based on
> priority provide a way to mitigate this problem from within the
> shrinker? For example, provide non-blocking behavior on the lowest
> priority to kick off flushing and allow progress into other shrinkers,
> otherwise we flush and wait if the priority is elevated..?

I tried that - exporting the priority to the shrink_control and so
on. The system either behaved the same (i.e. stalled on reclaim) or
randomly fell into a screaming pile of OOM killer rage-death. There
was no in-between reliable state...

> IOW, it sounds like the problem in this case is that we subject the rest
> of the allocation infrastructure to delays in configurations where we
> are one of N potential shrinkers with reclaimable objects, because we
> have to deal with this situation where our one shrinker actually is the
> main/primary choke point for multiple allocator -> direct reclaimers.

Yup. The mm reclaim design sucks ass because if we use purely
non-blocking reclaim techniques in the shrinkers we have no way of
throttling allocation demand to the rate at which shrinkers can
reclaim objects and we end up with gross cache imbalances under
memory pressure.

> I'm wondering if some kind of severity parameter managed by the shrinker
> infra would help us distinguish between those scenarios (even if it were
> a dumb LOW/HIGH priority param, where LOW allows for one pass through
> all of the shrinkers to kick off I/O and whatnot before any one of them
> should actually block on locks or I/O). Then again, I'm just handwaving
> as I'm only just familiarizing with the context and problem.

Tried that, too. :( Several different ways over the past few years.
Remember that we also have the non-blocking background reclaim
thread - even triggering that immediately on reclaim doesn't
prevent stalls, and my last patchset specifically excluded it
from concurrency control so it always ran immediately on reclaim.

Hmmm - I just had a thought.

Perhaps all we need to do is remove direct reclaim from the
shrinker. i.e. make the background reclaimer run on a ticket based
queuing system similar to the log reservations...

For non-blocking reclaim (i.e.  kswapd), we just queue the ticket
and return immediately, saying we've done the scan. 

For blocking reclaim, we take a ticket with the blocking scan count
on it and wait in the wakeup queue.  As background reclaim runs, it
decrements the lead ticket scan count. When it hits zero, wake up
the head of the wait queue. When the background thread has no
pending reclaim tickets or has nothing left to scan/reclaim (i.e.
goes idle), wake everyone....

This means waiting is ordered; it doesn't matter where in the fs we
reclaim from; direct reclaim itself doesn't get stuck waiting for
IO/transactions/log flushing; concurrency is managed by how we
process the ticket queue; and when we have lots of direct reclaim
demand the background thread just keeps running as optimally as
possible and kswapd is never blocked...
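
To make the ticket idea above concrete, here is a minimal userspace
sketch of the queue, using pthreads in place of kernel wait queues.
All names (reclaim_ticket, reclaim_queue and so on) are hypothetical,
and ticket lifetime and error handling are elided:

#include <pthread.h>
#include <stddef.h>

struct reclaim_ticket {
	long			scan_left;	/* scan count the caller asked for */
	struct reclaim_ticket	*next;
};

struct reclaim_queue {
	pthread_mutex_t		lock;
	pthread_cond_t		wake;		/* broadcast on progress or idle */
	struct reclaim_ticket	*head, *tail;
};

static struct reclaim_queue rq = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.wake = PTHREAD_COND_INITIALIZER,
};

/* Append a ticket; non-blocking callers (kswapd) stop here and report the scan as done. */
static void reclaim_ticket_queue(struct reclaim_queue *q, struct reclaim_ticket *t)
{
	pthread_mutex_lock(&q->lock);
	t->next = NULL;
	if (q->tail)
		q->tail->next = t;
	else
		q->head = t;
	q->tail = t;
	pthread_mutex_unlock(&q->lock);
}

/* Blocking (direct reclaim) callers queue a ticket and sleep until it drains. */
static void reclaim_ticket_wait(struct reclaim_queue *q, struct reclaim_ticket *t)
{
	reclaim_ticket_queue(q, t);
	pthread_mutex_lock(&q->lock);
	while (t->scan_left > 0)
		pthread_cond_wait(&q->wake, &q->lock);
	pthread_mutex_unlock(&q->lock);
}

/* Background reclaimer: debit the lead ticket after each batch of real reclaim work. */
static void reclaim_ticket_progress(struct reclaim_queue *q, long nr_reclaimed)
{
	pthread_mutex_lock(&q->lock);
	if (q->head && (q->head->scan_left -= nr_reclaimed) <= 0) {
		q->head = q->head->next;
		if (!q->head)
			q->tail = NULL;
		/* only the lead waiter can have drained; broadcast keeps the sketch simple */
		pthread_cond_broadcast(&q->wake);
	}
	pthread_mutex_unlock(&q->lock);
}

/* Background reclaimer went idle: nothing left to scan, so release everyone. */
static void reclaim_ticket_idle(struct reclaim_queue *q)
{
	pthread_mutex_lock(&q->lock);
	while (q->head) {
		q->head->scan_left = 0;
		q->head = q->head->next;
	}
	q->tail = NULL;
	pthread_cond_broadcast(&q->wake);
	pthread_mutex_unlock(&q->lock);
}

Because the queue is FIFO, a direct reclaimer only ever waits for the
work queued ahead of it, and kswapd never sleeps here at all, which is
the ordering and non-blocking behaviour described above.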

> (I also see no priority in struct shrink_control, so I guess that's an
> internal reclaim thing as it is.)

Yup, I had to promote it all the way through from the struct
scan_control -> priority field.

Cheers,

Dave.
Dave Chinner Feb. 17, 2017, 11:58 p.m. UTC | #10
On Fri, Feb 17, 2017 at 08:11:09PM +0900, Tetsuo Handa wrote:
> On 2017/02/17 7:21, Dave Chinner wrote:
> > FWIW, the major problem with removing the blocking in inode reclaim
> > is the ease with which you can then trigger the OOM killer from
> > userspace.  The high level memory reclaim algorithms break down when
> > there are hundreds of direct reclaim processes hammering on reclaim
> > and reclaim stops making progress because it's skipping dirty
> > objects.  Direct reclaim ends up insufficiently throttled, so rather
> > than blocking it winds up reclaim priority and then declares OOM
> > because reclaim runs out of retries before sufficient memory has
> > been freed.
> > 
> > That, right now, looks to be an unsolvable problem without a major
> > rework of direct reclaim.  I've pretty much given up on ever getting
> > the unbound direct reclaim concurrency problem that is causing us
> > these problems fixed, so we are left to handle it in the subsystem
> > shrinkers as best we can. That leaves us with an unfortunate choice: 
> > 
> > 	a) throttle excessive concurrency in the shrinker to prevent
> > 	   IO breakdown, thereby causing reclaim latency bubbles
> > 	   under load but having a stable, reliable system; or
> > 	b) optimise for minimal reclaim latency and risk userspace
> > 	   memory demand triggering the OOM killer whenever there
> > 	   are lots of dirty inodes in the system.
> > 
> > Quite frankly, there's only one choice we can make in this
> > situation: reliability is always more important than performance.
> 
> Is it possible to get rid of direct reclaim and let the allocating threads
> wait on a queue? I wished for such a change in the context of __GFP_KILLABLE at
> http://lkml.kernel.org/r/201702012049.BAG95379.VJFFOHMStLQFOO@I-love.SAKURA.ne.jp .

Yup, that's similar to what I've been suggesting - offloading the
direct reclaim slowpath to a limited set of kswapd-like workers
and blocking the allocating processes until there is either memory
for them or OOM is declared...
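
In the same userspace-model spirit, here is a sketch of what dropping
direct reclaim entirely could look like: a small, fixed pool of
kswapd-like workers does all the reclaim work while allocating threads
just sleep until memory shows up or OOM is declared. Everything here
(NR_RECLAIM_WORKERS, do_reclaim_pass(), the counters) is a hypothetical
stand-in, not existing kernel code:

#include <pthread.h>
#include <stdbool.h>

#define NR_RECLAIM_WORKERS	4	/* reclaim concurrency is capped by the pool size */

static pthread_mutex_t reclaim_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  reclaim_wake = PTHREAD_COND_INITIALIZER;	/* workers sleep here */
static pthread_cond_t  alloc_wake   = PTHREAD_COND_INITIALIZER;	/* allocators sleep here */
static long free_pages;			/* stand-in for the real free counters */
static long waiting_allocs;		/* allocators currently blocked */
static bool oom_declared;

/* Placeholder for the real shrinker/LRU work; it just pretends to free pages. */
static long do_reclaim_pass(void)
{
	return 32;
}

/* Allocator slowpath: never reclaims directly, just waits on the worker pool. */
static bool alloc_slowpath_wait(long nr_pages)
{
	bool ok;

	pthread_mutex_lock(&reclaim_lock);
	waiting_allocs++;
	pthread_cond_broadcast(&reclaim_wake);		/* poke the workers */
	while (free_pages < nr_pages && !oom_declared)
		pthread_cond_wait(&alloc_wake, &reclaim_lock);
	ok = !oom_declared;
	if (ok)
		free_pages -= nr_pages;
	waiting_allocs--;
	pthread_mutex_unlock(&reclaim_lock);
	return ok;
}

/* One of NR_RECLAIM_WORKERS threads running this loop. */
static void *reclaim_worker(void *arg)
{
	for (;;) {
		pthread_mutex_lock(&reclaim_lock);
		while (!waiting_allocs)
			pthread_cond_wait(&reclaim_wake, &reclaim_lock);
		pthread_mutex_unlock(&reclaim_lock);

		long freed = do_reclaim_pass();

		pthread_mutex_lock(&reclaim_lock);
		free_pages += freed;
		if (!freed && waiting_allocs)
			oom_declared = true;	/* crude stand-in for real OOM accounting */
		pthread_cond_broadcast(&alloc_wake);
		pthread_mutex_unlock(&reclaim_lock);
	}
	return arg;
}

However many allocating threads pile up, reclaim concurrency never
exceeds the worker pool size, which is exactly the throttling property
that unbound direct reclaim concurrency breaks today.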

Cheers,

Dave.
Brian Foster Feb. 18, 2017, 1:05 p.m. UTC | #11
On Sat, Feb 18, 2017 at 10:52:45AM +1100, Dave Chinner wrote:
> On Fri, Feb 17, 2017 at 02:05:00PM -0500, Brian Foster wrote:
> > On Fri, Feb 17, 2017 at 09:21:29AM +1100, Dave Chinner wrote:
> > > On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> > > > A couple of things that might
> > > > help avoid this situation are more aggressive
> > > > /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps a
> > > > smaller log size, which would cause more tail-pushing pressure on the AIL
> > > > instead of pressure originating from memory reclaim. The latter might
> > > > not be so convenient if this is an already populated backup server,
> > > > though.
> > > > 
> > > > Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> > > > and obviously your hack here to defer inode reclaim entirely to the
> > > > workqueue (I've CC'd Dave since it sounds like he might have been
> > > > working on this further..).
> > > 
> > > I was working on a more solid set of changes, but every time I
> > > updated the kernel tree I used as my base for development, the
> > > baseline kernel reclaim behaviour would change. I'd isolate the
> > > behavioural change, upgrade to the kernel that contained the fix,
> > > and then trip over some new whacky behaviour that made no sense. I
> > > spent more time in this loop than actually trying to fix the XFS
> > > problem - chasing a moving target makes finding the root cause of
> > > the reclaim stalls just about impossible. 
> > > 
> > > Brian, I can send you what I have but it's really just a bag of
> > > bolts at this point because I was never able to validate that any of
> > > the patches made a measurable improvement to reclaim behaviour under
> > > any workload I ran.....
> > > 
> > 
> > Sure, I'm curious to see what direction this goes in. I would think
> > anything that provides a backoff to other shrinkers would help this
> > particular workload where many different filesystems are active. FWIW,
> > I'd probably also need more details about what workloads you're testing
> > and how you're measuring improvements and whatnot to try and take any of
> > that stuff any farther (particularly how you verify the problems with
> > dropping blocking behavior entirely), though..
> 
> Ok, I'll send you a copy...
> 

Thanks.

> > > FWIW, the major problem with removing the blocking in inode reclaim
> > > is the ease with which you can then trigger the OOM killer from
> > > userspace.  The high level memory reclaim algorithms break down when
> > > there are hundreds of direct reclaim processes hammering on reclaim
> > > and reclaim stops making progress because it's skipping dirty
> > > objects.  Direct reclaim ends up insufficiently throttled, so rather
> > > than blocking it winds up reclaim priority and then declares OOM
> > > because reclaim runs out of retries before sufficient memory has
> > > been freed.
> > > 
> > 
> > I'd need to spend some time in the shrinker code to grok this, but if
> > there's such a priority, would switching blocking behavior based on
> > priority provide a way to mitigate this problem from within the
> > shrinker? For example, provide non-blocking behavior on the lowest
> > priority to kick off flushing and allow progress into other shrinkers,
> > otherwise we flush and wait if the priority is elevated..?
> 
> I tried that - exporting the priority to the shrink_control and so
> on. The system either behaved the same (i.e. stalled on reclaim) or
> randomly fell into a screaming pile of OOM killer rage-death. There
> was no in-between reliable state...
> 

Hmm, Ok, well at least I'm not completely off the rails then. :)

> > IOW, it sounds like the problem in this case is that we subject the rest
> > of the allocation infrastructure to delays in configurations where we
> > are one of N potential shrinkers with reclaimable objects, because we
> > have to deal with this situation where our one shrinker actually is the
> > main/primary choke point for multiple allocator -> direct reclaimers.
> 
> Yup. The mm reclaim design sucks ass because, if we use purely
> non-blocking reclaim techniques in the shrinkers, we have no way of
> throttling allocation demand to the rate at which shrinkers can
> reclaim objects, and we end up with gross cache imbalances under
> memory pressure.
> 
> > I'm wondering if some kind of severity parameter managed by the shrinker
> > infra would help us distinguish between those scenarios (even if it were
> > a dumb LOW/HIGH priority param, where LOW allows for one pass through
> > all of the shrinkers to kick off I/O and whatnot before any one of them
> > should actually block on locks or I/O). Then again, I'm just handwaving
> > as I'm only just familiarizing with the context and problem.
> 
> Tried that, too. :( Several different ways over the past few years.
> Remember that we also have the non-blocking background reclaim
> thread - even triggering that immediately on reclaim doesn't
> prevent stalls, and my last patchset specifically excluded it
> from concurrency control so that it always ran immediately on reclaim.
> 
> Hmmm - I just had a thought.
> 
> Perhaps all we need to do is remove direct reclaim from the
> shrinker. i.e. make the background reclaimer run on a ticket based
> queuing system similar to the log reservations...
> 
> For non-blocking reclaim (i.e.  kswapd), we just queue the ticket
> and return immediately, saying we've done the scan. 
> 
> For blocking reclaim, we take a ticket with the blocking scan count
> on it and wait in the wakeup queue.  As background reclaim runs, it
> decrements the lead ticket scan count. When it hits zero, wake up
> the head of the wait queue. When the background thread has no
> pending reclaim tickets or has nothing left to scan/reclaim (i.e.
> goes idle), wake everyone....
> 
> This means waiting is ordered; it doesn't matter where in the fs we
> reclaim from; direct reclaim itself doesn't get stuck waiting for
> IO/transactions/log flushing; concurrency is managed by how we
> process the ticket queue; and when we have lots of direct reclaim
> demand the background thread just keeps running as optimally as
> possible and kswapd is never blocked...
> 

Interesting.. IIUC, what this adds is smarter waiting logic by mapping
the actual calling behavior into the shrinker onto underlying reclaim
progress, rather than relying on whatever higher level priority is
calculated in the shrinker mechanism..? Sounds like a neat idea...

Brian

> > (I also see no priority in struct shrink_control, so I guess that's an
> > internal reclaim thing as it is.)
> 
> Yup, I had to promote it all the way through from the struct
> scan_control -> priority field.
> 
> Cheers,
> 
> Dave.
> 
> -- 
> Dave Chinner
> david@fromorbit.com
diff mbox

Patch

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 9ef152b..8adfb0a 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1254,7 +1254,7 @@  struct xfs_inode *
         xfs_reclaim_work_queue(mp);
         xfs_ail_push_all(mp->m_ail);

-       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
+       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
  }