From patchwork Mon Jul 9 13:13:07 2018
X-Patchwork-Submitter: Xishi Qiu <xishi.qiuxishi@alibaba-inc.com>
X-Patchwork-Id: 10514561
Date: Mon, 09 Jul 2018 21:13:07 +0800
From: "Xishi Qiu" <xishi.qiuxishi@alibaba-inc.com>
To: "Naoya Horiguchi"
Cc: "linux-mm", "linux-kernel", "陈义全"
Subject: Re: [RFC] a question about reuse hwpoison page in soft_offline_page()
References: <518e6b02-47ef-4ba8-ab98-8d807e2de7d5.xishi.qiuxishi@alibaba-inc.com>, <20180709102825.GA21147@hori1.linux.bs1.fc.nec.co.jp>
In-Reply-To: <20180709102825.GA21147@hori1.linux.bs1.fc.nec.co.jp>

Hi Naoya,

- does the same issue happen on soft offlining of normal pages?

I think yes. Anything can happen between get_any_page() and setting the
hwpoison flag:

  soft_offline_page
    get_any_page
    soft_offline_free_page
      SetPageHWPoison

I searched for the keyword PageHWPoison, and these two paths may have the
same issue:

  do_swap_page
    if (PageHWPoison(page))
      ret = VM_FAULT_HWPOISON;

  __do_fault
    if (unlikely(PageHWPoison(vmf->page)))
      return VM_FAULT_HWPOISON;

It may cause an MCE kill later. As for the allocation failure you
mentioned, I think we will hit OOM first.

- does hard offlining of free (huge) pages have a similar issue?

We can kill a process at any time, right?
Thanks,
Xishi Qiu

On Mon, Jul 09, 2018 at 01:43:35PM +0800, 裘稀石(稀石) wrote:
> Hi Naoya,
>
> Shall we fix this path too? It also will set hwpoison before
> dissolve_free_huge_page().
>
> soft_offline_huge_page
>   migrate_pages
>     unmap_and_move_huge_page
>       if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
>         dissolve_free_huge_page

Thank you Xishi, I added it to the current (still draft) version below.

I am starting to feel that the current code is broken with respect to the
behavior of PageHWPoison (at least) in soft offline, and this patch might
not cover all of the issues. My current questions/concerns are:

- does the same issue happen on soft offlining of normal pages?
- does hard offlining of free (huge) pages have a similar issue?

I'll try to clarify these next and will update the patch if necessary.
I'd be happy to get some comments on these.

Thanks,
Naoya Horiguchi
---
From 9ce4df899f4c859001571958be6a281cdaf5a58f Mon Sep 17 00:00:00 2001
From: Naoya Horiguchi
Date: Mon, 9 Jul 2018 13:07:46 +0900
Subject: [PATCH] mm: fix race on soft-offlining free huge pages

There's a race condition between soft offline and hugetlb_fault which
causes unexpected process killing and/or hugetlb allocation failure.

The process killing is caused by the following flow:

  CPU 0               CPU 1               CPU 2

  soft offline
    get_any_page
    // find the hugetlb is free
                      mmap a hugetlb file
                      page fault
                        ...
                        hugetlb_fault
                          hugetlb_no_page
                            alloc_huge_page
                            // succeed
    soft_offline_free_page
    // set hwpoison flag
                                          mmap the hugetlb file
                                          page fault
                                            ...
                                            hugetlb_fault
                                              hugetlb_no_page
                                                find_lock_page
                                                return VM_FAULT_HWPOISON
                                            mm_fault_error
                                              do_sigbus
                                              // kill the process

The hugetlb allocation failure comes from the following flow:

  CPU 0                                CPU 1

                                       mmap a hugetlb file
                                       // reserve all free page but don't fault-in
  soft offline
    get_any_page
    // find the hugetlb is free
    soft_offline_free_page
    // set hwpoison flag
    dissolve_free_huge_page
    // fail because all free hugepages are reserved
                                       page fault
                                         ...
                                         hugetlb_fault
                                           hugetlb_no_page
                                             alloc_huge_page
                                               ...
                                               dequeue_huge_page_node_exact
                                               // ignore hwpoisoned hugepage
                                               // and finally fail due to no-mem

The root cause of this is that the current soft-offline code is written
based on an assumption that the PageHWPoison flag should be set first to
avoid accessing the corrupted data. This makes sense for memory_failure()
or hard offline, but does not for soft offline, because soft offline is
not about a corrected error and is safe from data loss.
This patch changes the soft offline semantics so that it sets the
PageHWPoison flag only after containment of the error page completes
successfully.

Reported-by: Xishi Qiu
Suggested-by: Xishi Qiu
Signed-off-by: Naoya Horiguchi
---
 mm/hugetlb.c        | 11 +++++------
 mm/memory-failure.c | 13 +++++++------
 mm/migrate.c        |  2 --
 3 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d34225c1cb5b..3c9ce4c05f1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1479,22 +1479,20 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 /*
  * Dissolve a given free hugepage into free buddy pages. This function does
  * nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
- * number of free hugepages would be reduced below the number of reserved
- * hugepages.
+ * dissolution fails because a given page is not a free hugepage, or because
+ * free hugepages are fully reserved.
  */
 int dissolve_free_huge_page(struct page *page)
 {
-	int rc = 0;
+	int rc = -EBUSY;
 
 	spin_lock(&hugetlb_lock);
 	if (PageHuge(page) && !page_count(page)) {
 		struct page *head = compound_head(page);
 		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
-		if (h->free_huge_pages - h->resv_huge_pages == 0) {
-			rc = -EBUSY;
+		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		}
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
@@ -1508,6 +1506,7 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		update_and_free_page(h, head);
+		rc = 0;
 	}
 out:
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9d142b9b86dc..7a519d947408 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1598,8 +1598,9 @@ static int soft_offline_huge_page(struct page *page, int flags)
 		if (ret > 0)
 			ret = -EIO;
 	} else {
-		if (PageHuge(page))
-			dissolve_free_huge_page(page);
+		ret = dissolve_free_huge_page(page);
+		if (!ret)
+			num_poisoned_pages_inc();
 	}
 	return ret;
 }
@@ -1715,13 +1716,13 @@ static int soft_offline_in_use_page(struct page *page, int flags)
 
 static void soft_offline_free_page(struct page *page)
 {
+	int rc = 0;
 	struct page *head = compound_head(page);
 
-	if (!TestSetPageHWPoison(head)) {
+	if (PageHuge(head))
+		rc = dissolve_free_huge_page(page);
+	if (!rc && !TestSetPageHWPoison(page))
 		num_poisoned_pages_inc();
-		if (PageHuge(head))
-			dissolve_free_huge_page(page);
-	}
 }
 
 /**
diff --git a/mm/migrate.c b/mm/migrate.c
index 198af4289f9b..3ae213b799a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1318,8 +1318,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 out:
 	if (rc != -EAGAIN)
 		putback_active_hugepage(hpage);
-	if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
-		num_poisoned_pages_inc();
 
 	/*
 	 * If migration was not successful and there's a freeing callback, use