From patchwork Mon Jun 15 11:56:21 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11604609
From: Yafang Shao
To: hch@infradead.org, david@fromorbit.com, darrick.wong@oracle.com, mhocko@kernel.org, akpm@linux-foundation.org
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Yafang Shao
Subject: [PATCH v3] xfs: avoid deadlock when trigger memory reclaim in ->writepages
Date: Mon, 15 Jun 2020 07:56:21 -0400
Message-Id: <1592222181-9832-1-git-send-email-laoar.shao@gmail.com>

Recently we hit an XFS deadlock on one of our servers running an old kernel. The deadlock is caused by allocating memory in xfs_map_blocks() while doing writeback on behalf of memory reclaim. Although the deadlock happened on an old kernel, I think it could happen upstream as well.
This issue happened only once and can't be reproduced, so I haven't tried to reproduce it on an upstream kernel. Below is the call trace of this deadlock.

[480594.790087] INFO: task redis-server:16212 blocked for more than 120 seconds.
[480594.790087] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[480594.790088] redis-server    D ffffffff8168bd60     0 16212  14347 0x00000004
[480594.790090]  ffff880da128f070 0000000000000082 ffff880f94a2eeb0 ffff880da128ffd8
[480594.790092]  ffff880da128ffd8 ffff880da128ffd8 ffff880f94a2eeb0 ffff88103f9d6c40
[480594.790094]  0000000000000000 7fffffffffffffff ffff88207ffc0ee8 ffffffff8168bd60
[480594.790096] Call Trace:
[480594.790101]  [] schedule+0x29/0x70
[480594.790103]  [] schedule_timeout+0x239/0x2c0
[480594.790111]  [] io_schedule_timeout+0xae/0x130
[480594.790114]  [] io_schedule+0x18/0x20
[480594.790116]  [] bit_wait_io+0x11/0x50
[480594.790118]  [] __wait_on_bit+0x65/0x90
[480594.790121]  [] wait_on_page_bit+0x81/0xa0
[480594.790125]  [] shrink_page_list+0x6d2/0xaf0
[480594.790130]  [] shrink_inactive_list+0x223/0x710
[480594.790135]  [] shrink_lruvec+0x3b5/0x810
[480594.790139]  [] shrink_zone+0xba/0x1e0
[480594.790141]  [] do_try_to_free_pages+0x100/0x510
[480594.790143]  [] try_to_free_mem_cgroup_pages+0xdd/0x170
[480594.790145]  [] mem_cgroup_reclaim+0x4e/0x120
[480594.790147]  [] __mem_cgroup_try_charge+0x41c/0x670
[480594.790153]  [] __memcg_kmem_newpage_charge+0xf6/0x180
[480594.790157]  [] __alloc_pages_nodemask+0x22d/0x420
[480594.790162]  [] alloc_pages_current+0xaa/0x170
[480594.790165]  [] new_slab+0x30c/0x320
[480594.790168]  [] ___slab_alloc+0x3ac/0x4f0
[480594.790204]  [] __slab_alloc+0x40/0x5c
[480594.790206]  [] kmem_cache_alloc+0x193/0x1e0
[480594.790233]  [] kmem_zone_alloc+0x97/0x130 [xfs]
[480594.790247]  [] _xfs_trans_alloc+0x3a/0xa0 [xfs]
[480594.790261]  [] xfs_trans_alloc+0x3c/0x50 [xfs]
[480594.790276]  [] xfs_iomap_write_allocate+0x1cb/0x390 [xfs]
[480594.790299]  [] xfs_map_blocks+0x1a6/0x210 [xfs]
[480594.790312]  [] xfs_do_writepage+0x17b/0x550 [xfs]
[480594.790314]  [] write_cache_pages+0x251/0x4d0 [xfs]
[480594.790338]  [] xfs_vm_writepages+0xc5/0xe0 [xfs]
[480594.790341]  [] do_writepages+0x1e/0x40
[480594.790343]  [] __filemap_fdatawrite_range+0x65/0x80
[480594.790346]  [] filemap_write_and_wait_range+0x41/0x90
[480594.790360]  [] xfs_file_fsync+0x66/0x1e0 [xfs]
[480594.790363]  [] do_fsync+0x65/0xa0
[480594.790365]  [] SyS_fdatasync+0x13/0x20
[480594.790367]  [] system_call_fastpath+0x16/0x1b

Note that xfs_iomap_write_allocate() was replaced by xfs_convert_blocks() in commit 4ad765edb02a ("xfs: move xfs_iomap_write_allocate to xfs_aops.c") and write_cache_pages() was replaced by iomap_writepages() in commit 598ecfbaa742 ("iomap: lift the xfs writeback code to iomap"). So for upstream, the call trace should be:

xfs_vm_writepages
  -> iomap_writepages
     -> write_cache_pages
        -> iomap_do_writepage
           -> xfs_map_blocks
              -> xfs_convert_blocks
                 -> xfs_bmapi_convert_delalloc
                    -> xfs_trans_alloc // it should allocate pages with GFP_NOFS

I'm not sure whether it is proper to add GFP_NOFS to all ->writepages implementations, so I only add it for XFS.

Stefan also reported that he saw this issue twice while under memory pressure, so I add his Reported-by.
Reported-by: Stefan Priebe - Profihost AG
Signed-off-by: Yafang Shao
Reported-by: kernel test robot
---
v2 -> v3:
- retitle the subject from "iomap: avoid deadlock if memory reclaim is triggered in writepage path"
- set GFP_NOFS only for XFS ->writepages

v1 -> v2:
- retitle the subject from "xfs: avoid deadlock when tigger memory reclam in xfs_map_blocks()"
- set GFP_NOFS in iomap_do_writepage(), per Dave
---
 fs/xfs/xfs_aops.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index b356118..1ccfbf2 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -573,9 +573,21 @@ static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
 	struct writeback_control	*wbc)
 {
 	struct xfs_writepage_ctx wpc = { };
+	unsigned int		nofs_flag;
+	int			ret;
 
 	xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED);
-	return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops);
+
+	/*
+	 * We can allocate memory here while doing writeback on behalf of
+	 * memory reclaim. To avoid memory allocation deadlocks set the
+	 * task-wide nofs context for the following operations.
+	 */
+	nofs_flag = memalloc_nofs_save();
+	ret = iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops);
+	memalloc_nofs_restore(nofs_flag);
+
+	return ret;
 }
 
 STATIC int