From patchwork Thu Aug 6 18:49:19 2020
X-Patchwork-Submitter: Naoya Horiguchi
X-Patchwork-Id: 11704099
From: nao.horiguchi@gmail.com
To: linux-mm@kvack.org
Cc: mhocko@kernel.org, akpm@linux-foundation.org, mike.kravetz@oracle.com,
 osalvador@suse.de, tony.luck@intel.com, david@redhat.com,
 aneesh.kumar@linux.vnet.ibm.com, zeil@yandex-team.ru, cai@lca.pw,
 naoya.horiguchi@nec.com, linux-kernel@vger.kernel.org
Subject: [PATCH v6 08/12] mm,hwpoison: Rework soft offline for in-use pages
Date: Thu, 6 Aug 2020 18:49:19 +0000
Message-Id: <20200806184923.7007-9-nao.horiguchi@gmail.com>
In-Reply-To: <20200806184923.7007-1-nao.horiguchi@gmail.com>
References: <20200806184923.7007-1-nao.horiguchi@gmail.com>
From: Oscar Salvador

This patch changes the way we set and handle in-use poisoned pages. Until
now, poisoned pages were released to the buddy allocator, trusting that
the checks performed before handing out the page would act as a safety
net and skip that page. This has proved to be wrong, as there are pfn
walkers out there, like compaction, that only care whether the page is
PageBuddy and sits on a freelist. Although compaction might not be the
only such user, having poisoned pages in the buddy allocator seems a bad
idea in general, as we should only have free pages that are ready and
meant to be used as such.

Before explaining the approach taken, let us break down the kinds of
pages we can soft offline:

- Anonymous THP (after the split, they end up being 4K pages)
- Hugetlb
- Order-0 pages (which can be either migrated or invalidated)

* Normal pages (order-0 and anon-THP)

  - If they are clean and unmapped page cache pages, we invalidate them
    by means of invalidate_inode_page().
  - If they are mapped/dirty, we do the isolate-and-migrate dance.

Either way, we do not call put_page directly from those paths. Instead,
we keep the page and send it to page_handle_poison to perform the right
handling. page_handle_poison sets the HWPoison flag and does the last
put_page. This call to put_page is mainly there so that we can call
__page_cache_release, since that function is not exported.

Down the chain, we placed a check for HWPoison pages in
free_pages_prepare that simply skips any poisoned page, so those pages
never end up in any pcplist/freelist.

After that, we set the refcount on the page to 1 and we increment the
poisoned pages counter.

We could do as we do for free pages:
 1) wait until the page hits buddy's freelists
 2) take it off
 3) flag it
The problem is that we could race with an allocation, so by the time we
want to take the page off the buddy, the page is already allocated, and
we cannot soft-offline it.
This is not fatal, of course, but it is better if we can close the race,
as doing so does not require a lot of code.

* Hugetlb pages

  - We isolate-and-migrate them.

After the migration has been successful, we call dissolve_free_huge_page,
and we set HWPoison on the page if we succeed. Hugetlb has a slightly
different handling though. While for non-hugetlb pages we cared about
closing the race with an allocation, doing so for hugetlb pages requires
quite some additional code (we would need to hook into free_huge_page and
some other places). So I decided not to make the code overly complicated
and to just fail normally if the page was allocated in the meantime.

Because of the way we now handle in-use pages, we no longer need the
put-as-isolation-migratetype dance that was guarding against poisoned
pages ending up in pcplists.

Signed-off-by: Oscar Salvador
Signed-off-by: Naoya Horiguchi
---
 include/linux/page-flags.h |  5 -----
 mm/memory-failure.c        | 45 ++++++++++++++------------------------
 mm/migrate.c               | 11 +++-------
 mm/page_alloc.c            | 28 ------------------------
 4 files changed, 19 insertions(+), 70 deletions(-)

diff --git v5.8-rc7-mmotm-2020-07-27-18-18/include/linux/page-flags.h v5.8-rc7-mmotm-2020-07-27-18-18_patched/include/linux/page-flags.h
index 9fa5d4e2d69a..d1df51ed6eeb 100644
--- v5.8-rc7-mmotm-2020-07-27-18-18/include/linux/page-flags.h
+++ v5.8-rc7-mmotm-2020-07-27-18-18_patched/include/linux/page-flags.h
@@ -422,14 +422,9 @@ PAGEFLAG_FALSE(Uncached)
 PAGEFLAG(HWPoison, hwpoison, PF_ANY)
 TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
 #define __PG_HWPOISON (1UL << PG_hwpoison)
-extern bool set_hwpoison_free_buddy_page(struct page *page);
 extern bool take_page_off_buddy(struct page *page);
 #else
 PAGEFLAG_FALSE(HWPoison)
-static inline bool set_hwpoison_free_buddy_page(struct page *page)
-{
-	return 0;
-}
 #define __PG_HWPOISON 0
 #endif
 
diff --git v5.8-rc7-mmotm-2020-07-27-18-18/mm/memory-failure.c v5.8-rc7-mmotm-2020-07-27-18-18_patched/mm/memory-failure.c
index 0e619012e050..95bf8aa44a9a 100644
--- v5.8-rc7-mmotm-2020-07-27-18-18/mm/memory-failure.c
+++ v5.8-rc7-mmotm-2020-07-27-18-18_patched/mm/memory-failure.c
@@ -65,8 +65,12 @@ int sysctl_memory_failure_recovery __read_mostly = 1;
 
 atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
 
-static void page_handle_poison(struct page *page)
+static void page_handle_poison(struct page *page, bool release)
 {
+	if (release) {
+		put_page(page);
+		drain_all_pages(page_zone(page));
+	}
 	SetPageHWPoison(page);
 	page_ref_inc(page);
 	num_poisoned_pages_inc();
@@ -1756,19 +1760,13 @@ static int soft_offline_huge_page(struct page *page, int flags)
 		ret = -EIO;
 	} else {
 		/*
-		 * We set PG_hwpoison only when the migration source hugepage
-		 * was successfully dissolved, because otherwise hwpoisoned
-		 * hugepage remains on free hugepage list, then userspace will
-		 * find it as SIGBUS by allocation failure. That's not expected
-		 * in soft-offlining.
+		 * We set PG_hwpoison only when we were able to take the page
+		 * off the buddy.
 		 */
-		ret = dissolve_free_huge_page(page);
-		if (!ret) {
-			if (set_hwpoison_free_buddy_page(page))
-				num_poisoned_pages_inc();
-			else
-				ret = -EBUSY;
-		}
+		if (!dissolve_free_huge_page(page) && take_page_off_buddy(page))
+			page_handle_poison(page, false);
+		else
+			ret = -EBUSY;
 	}
 	return ret;
 }
@@ -1807,10 +1805,8 @@ static int __soft_offline_page(struct page *page, int flags)
 	 * would need to fix isolation locking first.
 	 */
 	if (ret == 1) {
-		put_page(page);
 		pr_info("soft_offline: %#lx: invalidated\n", pfn);
-		SetPageHWPoison(page);
-		num_poisoned_pages_inc();
+		page_handle_poison(page, true);
 		return 0;
 	}
 
@@ -1841,7 +1837,9 @@ static int __soft_offline_page(struct page *page, int flags)
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_FAILURE);
-		if (ret) {
+		if (!ret) {
+			page_handle_poison(page, true);
+		} else {
 			if (!list_empty(&pagelist))
 				putback_movable_pages(&pagelist);
 
@@ -1860,27 +1858,16 @@ static int __soft_offline_page(struct page *page, int flags)
 static int soft_offline_in_use_page(struct page *page, int flags)
 {
 	int ret;
-	int mt;
 	struct page *hpage = compound_head(page);
 
 	if (!PageHuge(page) && PageTransHuge(hpage))
 		if (try_to_split_thp_page(page, "soft offline") < 0)
 			return -EBUSY;
 
-	/*
-	 * Setting MIGRATE_ISOLATE here ensures that the page will be linked
-	 * to free list immediately (not via pcplist) when released after
-	 * successful page migration. Otherwise we can't guarantee that the
-	 * page is really free after put_page() returns, so
-	 * set_hwpoison_free_buddy_page() highly likely fails.
-	 */
-	mt = get_pageblock_migratetype(page);
-	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 	if (PageHuge(page))
 		ret = soft_offline_huge_page(page, flags);
 	else
 		ret = __soft_offline_page(page, flags);
-	set_pageblock_migratetype(page, mt);
 	return ret;
 }
 
@@ -1889,7 +1876,7 @@ static int soft_offline_free_page(struct page *page)
 	int rc = -EBUSY;
 
 	if (!dissolve_free_huge_page(page) && take_page_off_buddy(page)) {
-		page_handle_poison(page);
+		page_handle_poison(page, false);
 		rc = 0;
 	}
 
diff --git v5.8-rc7-mmotm-2020-07-27-18-18/mm/migrate.c v5.8-rc7-mmotm-2020-07-27-18-18_patched/mm/migrate.c
index 2c809ffcf0e1..d7a9379c343b 100644
--- v5.8-rc7-mmotm-2020-07-27-18-18/mm/migrate.c
+++ v5.8-rc7-mmotm-2020-07-27-18-18_patched/mm/migrate.c
@@ -1222,16 +1222,11 @@ static int unmap_and_move(new_page_t get_new_page,
 	 * we want to retry.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		put_page(page);
-		if (reason == MR_MEMORY_FAILURE) {
+		if (reason != MR_MEMORY_FAILURE)
 			/*
-			 * Set PG_HWPoison on just freed page
-			 * intentionally. Although it's rather weird,
-			 * it's how HWPoison flag works at the moment.
+			 * We release the page in page_handle_poison.
 			 */
-			if (set_hwpoison_free_buddy_page(page))
-				num_poisoned_pages_inc();
-		}
+			put_page(page);
 	} else {
 		if (rc != -EAGAIN) {
 			if (likely(!__PageMovable(page))) {
diff --git v5.8-rc7-mmotm-2020-07-27-18-18/mm/page_alloc.c v5.8-rc7-mmotm-2020-07-27-18-18_patched/mm/page_alloc.c
index aab89f7db4ac..e4896e674594 100644
--- v5.8-rc7-mmotm-2020-07-27-18-18/mm/page_alloc.c
+++ v5.8-rc7-mmotm-2020-07-27-18-18_patched/mm/page_alloc.c
@@ -8843,32 +8843,4 @@ bool take_page_off_buddy(struct page *page)
 	spin_unlock_irqrestore(&zone->lock, flags);
 	return ret;
 }
-
-/*
- * Set PG_hwpoison flag if a given page is confirmed to be a free page. This
- * test is performed under the zone lock to prevent a race against page
- * allocation.
- */
-bool set_hwpoison_free_buddy_page(struct page *page)
-{
-	struct zone *zone = page_zone(page);
-	unsigned long pfn = page_to_pfn(page);
-	unsigned long flags;
-	unsigned int order;
-	bool hwpoisoned = false;
-
-	spin_lock_irqsave(&zone->lock, flags);
-	for (order = 0; order < MAX_ORDER; order++) {
-		struct page *page_head = page - (pfn & ((1 << order) - 1));
-
-		if (PageBuddy(page_head) && page_order(page_head) >= order) {
-			if (!TestSetPageHWPoison(page))
-				hwpoisoned = true;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&zone->lock, flags);
-
-	return hwpoisoned;
-}
 #endif