From patchwork Tue Jun 22 07:46:42 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12336529
Date: Tue, 22 Jun 2021 01:46:42 -0600
In-Reply-To: <20210614151030.35fd6ecfecc2e3df2d7c5dc0@linux-foundation.org>
Message-Id: <20210622074642.785473-1-yuzhao@google.com>
References: <20210614151030.35fd6ecfecc2e3df2d7c5dc0@linux-foundation.org>
Subject: [PATCH v2] mm/vmscan.c: fix potential deadlock in reclaim_pages()
From: Yu Zhao
To: Andrew Morton, Minchan Kim
Cc: linux-mm@kvack.org, Yu Zhao

Theoretically, without the protection of memalloc_noreclaim_save() and
memalloc_noreclaim_restore(), reclaim_pages() can recurse into the block
I/O layer and deadlock.

Querying 'reclaim_pages' in our kernel crash databases didn't yield any
results, so the deadlock seems unlikely to happen in practice. A possible
explanation is that the only user of reclaim_pages(), i.e., MADV_PAGEOUT,
is usually called before memory pressure builds up, e.g., on Android and
Chrome OS. Under such conditions, allocations in the block I/O layer can
be fulfilled without diverting to direct reclaim, and therefore the
recursion is avoided.

Signed-off-by: Yu Zhao
---
v1 -> v2: update the commit message per request

 mm/vmscan.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5199b9696bab..2a02739b20f4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1701,6 +1701,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	unsigned int nr_reclaimed;
 	struct page *page, *next;
 	LIST_HEAD(clean_pages);
+	unsigned int noreclaim_flag;
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
 		if (!PageHuge(page) && page_is_file_lru(page) &&
@@ -1711,8 +1712,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 		}
 	}
 
+	/*
+	 * We should be safe here since we are only dealing with file pages and
+	 * we are not kswapd and therefore cannot write dirty file pages. But
+	 * call memalloc_noreclaim_save() anyway, just in case these conditions
+	 * change in the future.
+	 */
+	noreclaim_flag = memalloc_noreclaim_save();
 	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
 					&stat, true);
+	memalloc_noreclaim_restore(noreclaim_flag);
+
 	list_splice(&clean_pages, page_list);
 	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE,
 			    -(long)nr_reclaimed);
@@ -2306,6 +2316,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
 	LIST_HEAD(node_page_list);
 	struct reclaim_stat dummy_stat;
 	struct page *page;
+	unsigned int noreclaim_flag;
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.priority = DEF_PRIORITY,
@@ -2314,6 +2325,8 @@ unsigned long reclaim_pages(struct list_head *page_list)
 		.may_swap = 1,
 	};
 
+	noreclaim_flag = memalloc_noreclaim_save();
+
 	while (!list_empty(page_list)) {
 		page = lru_to_page(page_list);
 		if (nid == NUMA_NO_NODE) {
@@ -2350,6 +2363,8 @@ unsigned long reclaim_pages(struct list_head *page_list)
 		}
 	}
 
+	memalloc_noreclaim_restore(noreclaim_flag);
+
 	return nr_reclaimed;
 }
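
For context, memalloc_noreclaim_save()/memalloc_noreclaim_restore() set and
then restore PF_MEMALLOC on the current task, and the page allocator skips
direct reclaim for tasks with PF_MEMALLOC set, which is what breaks the
reclaim -> block I/O -> allocation -> reclaim cycle described above. A minimal
sketch of that bracketing pattern is below; it is illustrative only, and
reclaim_some_pages() is a hypothetical caller, not code from this patch.

	#include <linux/sched/mm.h>

	/*
	 * Hypothetical caller, for illustration only: any allocation made
	 * between save and restore (e.g. by the block I/O layer during
	 * writeback) sees PF_MEMALLOC set on the current task and will not
	 * recurse into direct reclaim.
	 */
	static void reclaim_some_pages(struct list_head *page_list)
	{
		unsigned int noreclaim_flag;

		noreclaim_flag = memalloc_noreclaim_save();

		/* ... reclaim work that may allocate memory ... */

		memalloc_noreclaim_restore(noreclaim_flag);
	}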