From patchwork Tue Jul 17 05:32:32 2018
X-Patchwork-Submitter: Naoya Horiguchi
X-Patchwork-Id: 10528129
From: Naoya Horiguchi
To: linux-mm@kvack.org
Cc: Michal Hocko, Andrew Morton, xishi.qiuxishi@alibaba-inc.com,
    zy.zhengyi@alibaba-inc.com, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/2] mm: soft-offline: close the race against page allocation
Date: Tue, 17 Jul 2018 14:32:32 +0900
Message-Id: <1531805552-19547-3-git-send-email-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1531805552-19547-1-git-send-email-n-horiguchi@ah.jp.nec.com>
References: <1531805552-19547-1-git-send-email-n-horiguchi@ah.jp.nec.com>

A process can be killed with SIGBUS(BUS_MCEERR_AR) when it tries to
allocate a page that was just freed partway through soft-offlining. This
is undesirable because soft-offline (which handles corrected errors) is
less aggressive than hard-offline (which handles uncorrected errors), and
we can legitimately make soft-offline fail and keep using the page for a
good reason such as "the system is busy."

This patch makes two main changes:

- it sets the migrate type of the target page to MIGRATE_ISOLATE. As done
  in free_unref_page_commit(), this makes the kernel bypass the pcplist
  when freeing the page, so we can assume the page is on a freelist as
  soon as put_page() returns;

- it sets PG_hwpoison on a free page under zone->lock, which protects the
  freelists. This lets us avoid setting PG_hwpoison on a page that has
  already been chosen for an imminent allocation.
Reported-by: Xishi Qiu
Signed-off-by: Naoya Horiguchi
---
changelog v1->v2:
- updated comment on set_hwpoison_free_buddy_page(),
- moved the call to set_hwpoison_free_buddy_page() from mm/migrate.c to
  mm/memory-failure.c, which is necessary to check the return code of
  set_hwpoison_free_buddy_page().
---
 include/linux/page-flags.h |  5 +++++
 include/linux/swapops.h    | 10 ----------
 mm/memory-failure.c        | 35 +++++++++++++++++++++++++++++------
 mm/migrate.c               |  9 ---------
 mm/page_alloc.c            | 30 ++++++++++++++++++++++++++++++
 5 files changed, 64 insertions(+), 25 deletions(-)

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/page-flags.h v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/page-flags.h
index 901943e..74bee8c 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/page-flags.h
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/page-flags.h
@@ -369,8 +369,13 @@ PAGEFLAG_FALSE(Uncached)
 PAGEFLAG(HWPoison, hwpoison, PF_ANY)
 TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
 #define __PG_HWPOISON (1UL << PG_hwpoison)
+extern bool set_hwpoison_free_buddy_page(struct page *page);
 #else
 PAGEFLAG_FALSE(HWPoison)
+static inline bool set_hwpoison_free_buddy_page(struct page *page)
+{
+	return false;
+}
 #define __PG_HWPOISON 0
 #endif

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/swapops.h v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/swapops.h
index 9c0eb4d..fe8e08b 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/swapops.h
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/swapops.h
@@ -335,11 +335,6 @@ static inline int is_hwpoison_entry(swp_entry_t entry)
 	return swp_type(entry) == SWP_HWPOISON;
 }
 
-static inline bool test_set_page_hwpoison(struct page *page)
-{
-	return TestSetPageHWPoison(page);
-}
-
 static inline void num_poisoned_pages_inc(void)
 {
 	atomic_long_inc(&num_poisoned_pages);
@@ -362,11 +357,6 @@ static inline int is_hwpoison_entry(swp_entry_t swp)
 	return 0;
 }
 
-static inline bool test_set_page_hwpoison(struct page *page)
-{
-	return false;
-}
-
 static inline void num_poisoned_pages_inc(void)
 {
 }

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
index 9b77f85..936d0e7 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 #include "ras/ras_event.h"
@@ -1609,8 +1610,10 @@ static int soft_offline_huge_page(struct page *page, int flags)
 	 */
 	ret = dissolve_free_huge_page(page);
 	if (!ret) {
-		if (!TestSetPageHWPoison(page))
+		if (set_hwpoison_free_buddy_page(page))
 			num_poisoned_pages_inc();
+		else
+			ret = -EBUSY;
 	}
 	return ret;
@@ -1688,6 +1691,11 @@ static int __soft_offline_page(struct page *page, int flags)
 				pfn, ret, page->flags, &page->flags);
 			if (ret > 0)
 				ret = -EIO;
+		} else {
+			if (set_hwpoison_free_buddy_page(page))
+				num_poisoned_pages_inc();
+			else
+				ret = -EBUSY;
 		}
 	} else {
 		pr_info("soft offline: %#lx: isolation failed: %d, page count %d, type %lx (%pGp)\n",
@@ -1699,6 +1707,7 @@ static int __soft_offline_page(struct page *page, int flags)
 static int soft_offline_in_use_page(struct page *page, int flags)
 {
 	int ret;
+	int mt;
 	struct page *hpage = compound_head(page);
 
 	if (!PageHuge(page) && PageTransHuge(hpage)) {
@@ -1717,23 +1726,37 @@ static int soft_offline_in_use_page(struct page *page, int flags)
 		put_hwpoison_page(hpage);
 	}
 
+	/*
+	 * Setting MIGRATE_ISOLATE here ensures that the page will be linked
+	 * to free list immediately (not via pcplist) when released after
+	 * successful page migration. Otherwise we can't guarantee that the
+	 * page is really free after put_page() returns, so
+	 * set_hwpoison_free_buddy_page() highly likely fails.
+	 */
+	mt = get_pageblock_migratetype(page);
+	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 	if (PageHuge(page))
 		ret = soft_offline_huge_page(page, flags);
 	else
 		ret = __soft_offline_page(page, flags);
-
+	set_pageblock_migratetype(page, mt);
 	return ret;
 }
 
-static void soft_offline_free_page(struct page *page)
+static int soft_offline_free_page(struct page *page)
 {
 	int rc = 0;
 	struct page *head = compound_head(page);
 
 	if (PageHuge(head))
 		rc = dissolve_free_huge_page(page);
-	if (!rc && !TestSetPageHWPoison(page))
-		num_poisoned_pages_inc();
+	if (!rc) {
+		if (set_hwpoison_free_buddy_page(page))
+			num_poisoned_pages_inc();
+		else
+			rc = -EBUSY;
+	}
+	return rc;
 }
 
 /**
@@ -1777,7 +1800,7 @@ int soft_offline_page(struct page *page, int flags)
 	if (ret > 0)
 		ret = soft_offline_in_use_page(page, flags);
 	else if (ret == 0)
-		soft_offline_free_page(page);
+		ret = soft_offline_free_page(page);
 
 	return ret;
 }

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/migrate.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/migrate.c
index 3ae213b..4fd0fe0 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/migrate.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/migrate.c
@@ -1193,15 +1193,6 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
 		put_page(page);
-		if (reason == MR_MEMORY_FAILURE) {
-			/*
-			 * Set PG_HWPoison on just freed page
-			 * intentionally. Although it's rather weird,
-			 * it's how HWPoison flag works at the moment.
-			 */
-			if (!test_set_page_hwpoison(page))
-				num_poisoned_pages_inc();
-		}
 	} else {
 		if (rc != -EAGAIN) {
 			if (likely(!__PageMovable(page))) {

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/page_alloc.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/page_alloc.c
index 607deff..4058b7e 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/page_alloc.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/page_alloc.c
@@ -8027,3 +8027,33 @@ bool is_free_buddy_page(struct page *page)
 
 	return order < MAX_ORDER;
 }
+
+#ifdef CONFIG_MEMORY_FAILURE
+/*
+ * Set PG_hwpoison flag if a given page is confirmed to be a free page. This
+ * test is performed under the zone lock to prevent a race against page
+ * allocation.
+ */
+bool set_hwpoison_free_buddy_page(struct page *page)
+{
+	struct zone *zone = page_zone(page);
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long flags;
+	unsigned int order;
+	bool hwpoisoned = false;
+
+	spin_lock_irqsave(&zone->lock, flags);
+	for (order = 0; order < MAX_ORDER; order++) {
+		struct page *page_head = page - (pfn & ((1 << order) - 1));
+
+		if (PageBuddy(page_head) && page_order(page_head) >= order) {
+			if (!TestSetPageHWPoison(page))
+				hwpoisoned = true;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+
+	return hwpoisoned;
+}
+#endif