From patchwork Mon Jul 9 12:42:28 2018
X-Patchwork-Submitter: 裘稀石(稀石)
X-Patchwork-Id: 10514531
Date: Mon, 09 Jul 2018 20:42:28 +0800
From: 裘稀石(稀石) <xishi.qiuxishi@alibaba-inc.com>
To: Naoya Horiguchi
Cc: linux-mm, linux-kernel, 陈义全
Message-ID: <7049d18e-5ce1-4c1a-9391-f9b866d79c93.xishi.qiuxishi@alibaba-inc.com>
Subject: Re: [RFC] a question about reuse hwpoison page in soft_offline_page()

Hi Naoya,

This patch looks good to me, at least for soft offline of hugetlb pages.

Thanks,
Xishi Qiu

On Mon, Jul 09, 2018 at 01:43:35PM +0800, 裘稀石(稀石) wrote:
> Hi Naoya,
>
> Shall we fix this path too? It also sets hwpoison before
> dissolve_free_huge_page():
>
> soft_offline_huge_page
>   migrate_pages
>     unmap_and_move_huge_page
>       if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
>         dissolve_free_huge_page

Thank you Xishi, I added it to the current (still draft) version below.

I am starting to feel that the current code is broken with respect to the
behavior of PageHWPoison (at least) in soft offline, and this patch might
not cover all of the issues. My current questions/concerns are:

- does the same issue happen on soft offlining of normal pages?
- does hard offlining of a free (huge) page have a similar issue?

I'll try to clarify these next and will update the patch if necessary.
I'd be happy to get some comments on these.

Thanks,
Naoya Horiguchi
---
From 9ce4df899f4c859001571958be6a281cdaf5a58f Mon Sep 17 00:00:00 2001
From: Naoya Horiguchi
Date: Mon, 9 Jul 2018 13:07:46 +0900
Subject: [PATCH] mm: fix race on soft-offlining free huge pages

There's a race condition between soft offline and hugetlb_fault which
causes unexpected process killing and/or hugetlb allocation failure.

The process killing is caused by the following flow:

  CPU 0               CPU 1               CPU 2

  soft offline
    get_any_page
    // find the hugetlb is free
                      mmap a hugetlb file
                      page fault
                        ...
                          hugetlb_fault
                            hugetlb_no_page
                              alloc_huge_page
                              // succeed
    soft_offline_free_page
    // set hwpoison flag
                                          mmap the hugetlb file
                                          page fault
                                            ...
                                              hugetlb_fault
                                                hugetlb_no_page
                                                  find_lock_page
                                                  return VM_FAULT_HWPOISON
                                            mm_fault_error
                                              do_sigbus
                                              // kill the process

The hugetlb allocation failure comes from the following flow:

  CPU 0                          CPU 1

                                 mmap a hugetlb file
                                 // reserve all free pages but don't fault in
  soft offline
    get_any_page
    // find the hugetlb is free
    soft_offline_free_page
    // set hwpoison flag
    dissolve_free_huge_page
    // fail because all free hugepages are reserved
                                 page fault
                                   ...
                                     hugetlb_fault
                                       hugetlb_no_page
                                         alloc_huge_page
                                           ...
                                             dequeue_huge_page_node_exact
                                             // ignore hwpoisoned hugepage
                                             // and finally fail due to no-mem

The root cause of this is that the current soft-offline code is written
based on the assumption that the PageHWPoison flag should be set first to
avoid accessing the corrupted data. This makes sense for memory_failure()
or hard offline, but not for soft offline, which is not about a corrected
error and is safe from data loss.

This patch changes the soft offline semantics so that the PageHWPoison
flag is set only after containment of the error page completes
successfully.
Reported-by: Xishi Qiu
Suggested-by: Xishi Qiu
Signed-off-by: Naoya Horiguchi
---
 mm/hugetlb.c        | 11 +++++------
 mm/memory-failure.c | 13 +++++++------
 mm/migrate.c        |  2 --
 3 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d34225c1cb5b..3c9ce4c05f1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1479,22 +1479,20 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 /*
  * Dissolve a given free hugepage into free buddy pages. This function does
  * nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
- * number of free hugepages would be reduced below the number of reserved
- * hugepages.
+ * dissolution fails because a given page is not a free hugepage, or because
+ * free hugepages are fully reserved.
  */
 int dissolve_free_huge_page(struct page *page)
 {
-	int rc = 0;
+	int rc = -EBUSY;
 
 	spin_lock(&hugetlb_lock);
 	if (PageHuge(page) && !page_count(page)) {
 		struct page *head = compound_head(page);
 		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
-		if (h->free_huge_pages - h->resv_huge_pages == 0) {
-			rc = -EBUSY;
+		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		}
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
@@ -1508,6 +1506,7 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		update_and_free_page(h, head);
+		rc = 0;
 	}
 out:
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9d142b9b86dc..7a519d947408 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1598,8 +1598,9 @@ static int soft_offline_huge_page(struct page *page, int flags)
 		if (ret > 0)
 			ret = -EIO;
 	} else {
-		if (PageHuge(page))
-			dissolve_free_huge_page(page);
+		ret = dissolve_free_huge_page(page);
+		if (!ret)
+			num_poisoned_pages_inc();
 	}
 	return ret;
 }
@@ -1715,13 +1716,13 @@ static int soft_offline_in_use_page(struct page *page, int flags)
 
 static void soft_offline_free_page(struct page *page)
 {
+	int rc = 0;
 	struct page *head = compound_head(page);
 
-	if (!TestSetPageHWPoison(head)) {
+	if (PageHuge(head))
+		rc = dissolve_free_huge_page(page);
+	if (!rc && !TestSetPageHWPoison(page))
 		num_poisoned_pages_inc();
-		if (PageHuge(head))
-			dissolve_free_huge_page(page);
-	}
 }
 
 /**
diff --git a/mm/migrate.c b/mm/migrate.c
index 198af4289f9b..3ae213b799a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1318,8 +1318,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 out:
 	if (rc != -EAGAIN)
 		putback_active_hugepage(hpage);
-	if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
-		num_poisoned_pages_inc();
 
 	/*
 	 * If migration was not successful and there's a freeing callback, use