From patchwork Fri Nov 13 20:53:56 2020
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 11904561
From: Yang Shi
To: mhocko@suse.com, ziy@nvidia.com, songliubraving@fb.com, mgorman@suse.de, jack@suse.cz, willy@infradead.org, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 2/5] mm: migrate: simplify the logic for handling permanent failure
Date: Fri, 13 Nov 2020 12:53:56 -0800
Message-Id: <20201113205359.556831-3-shy828301@gmail.com>
In-Reply-To: <20201113205359.556831-1-shy828301@gmail.com>
References: <20201113205359.556831-1-shy828301@gmail.com>

When unmap_and_move{_huge_page}() returns !-EAGAIN and !MIGRATEPAGE_SUCCESS,
the page is put back to the LRU, or to the proper list if it is a non-LRU
movable page. But the callers always call putback_movable_pages() to put the
failed pages back later on anyway, so putting every single page back
immediately is inefficient, and the code is convoluted.

Put each failed page on a separate list instead, then splice that list onto
the migrate list once all pages have been tried. It is the caller's
responsibility to call putback_movable_pages() to handle the failures. This
also makes the code simpler and more readable.

After the change the rules are:
    * Success: non-hugetlb page will be freed, hugetlb page will be put back
    * -EAGAIN: stay on the from list
    * -ENOMEM: stay on the from list
    * Other errno: put on the ret_pages list, then splice to the from list

The from list is now empty iff all pages were migrated successfully; this
was not the case before. This has no impact on existing callsites.
Reviewed-by: Zi Yan
Signed-off-by: Yang Shi
---
 mm/migrate.c | 68 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 8a2e7e19e27b..d7167f7107bd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1169,7 +1169,8 @@ static int unmap_and_move(new_page_t get_new_page,
 				   free_page_t put_new_page,
 				   unsigned long private, struct page *page,
 				   int force, enum migrate_mode mode,
-				   enum migrate_reason reason)
+				   enum migrate_reason reason,
+				   struct list_head *ret)
 {
 	int rc = MIGRATEPAGE_SUCCESS;
 	struct page *newpage = NULL;
@@ -1206,7 +1207,14 @@ static int unmap_and_move(new_page_t get_new_page,
 		 * migrated will have kept its references and be restored.
 		 */
 		list_del(&page->lru);
+	}
 
+	/*
+	 * If migration is successful, releases reference grabbed during
+	 * isolation. Otherwise, restore the page to right list unless
+	 * we want to retry.
+	 */
+	if (rc == MIGRATEPAGE_SUCCESS) {
 		/*
 		 * Compaction can migrate also non-LRU pages which are
 		 * not accounted to NR_ISOLATED_*. They can be recognized
@@ -1215,35 +1223,16 @@ static int unmap_and_move(new_page_t get_new_page,
 		if (likely(!__PageMovable(page)))
 			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
 					page_is_file_lru(page), -thp_nr_pages(page));
-	}
-
-	/*
-	 * If migration is successful, releases reference grabbed during
-	 * isolation. Otherwise, restore the page to right list unless
-	 * we want to retry.
-	 */
-	if (rc == MIGRATEPAGE_SUCCESS) {
 		if (reason != MR_MEMORY_FAILURE)
 			/*
 			 * We release the page in page_handle_poison.
 			 */
 			put_page(page);
 	} else {
-		if (rc != -EAGAIN) {
-			if (likely(!__PageMovable(page))) {
-				putback_lru_page(page);
-				goto put_new;
-			}
+		if (rc != -EAGAIN)
+			list_add_tail(&page->lru, ret);
 
-			lock_page(page);
-			if (PageMovable(page))
-				putback_movable_page(page);
-			else
-				__ClearPageIsolated(page);
-			unlock_page(page);
-			put_page(page);
-		}
-put_new:
 		if (put_new_page)
 			put_new_page(newpage, private);
 		else
@@ -1274,7 +1263,8 @@ static int unmap_and_move(new_page_t get_new_page,
 static int unmap_and_move_huge_page(new_page_t get_new_page,
 				free_page_t put_new_page, unsigned long private,
 				struct page *hpage, int force,
-				enum migrate_mode mode, int reason)
+				enum migrate_mode mode, int reason,
+				struct list_head *ret)
 {
 	int rc = -EAGAIN;
 	int page_was_mapped = 0;
@@ -1290,7 +1280,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	 * kicking migration.
 	 */
 	if (!hugepage_migration_supported(page_hstate(hpage))) {
-		putback_active_hugepage(hpage);
+		list_move_tail(&hpage->lru, ret);
 		return -ENOSYS;
 	}
 
@@ -1372,8 +1362,10 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 out_unlock:
 	unlock_page(hpage);
 out:
-	if (rc != -EAGAIN)
+	if (rc == MIGRATEPAGE_SUCCESS)
 		putback_active_hugepage(hpage);
+	else if (rc != -EAGAIN && rc != MIGRATEPAGE_SUCCESS)
+		list_move_tail(&hpage->lru, ret);
 
 	/*
 	 * If migration was not successful and there's a freeing callback, use
@@ -1404,8 +1396,8 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
  *
  * The function returns after 10 attempts or if no pages are movable any more
  * because the list has become empty or no retryable pages exist any more.
- * The caller should call putback_movable_pages() to return pages to the LRU
- * or free list only if ret != 0.
+ * It is caller's responsibility to call putback_movable_pages() to return pages
+ * to the LRU or free list only if ret != 0.
  *
  * Returns the number of pages that were not migrated, or an error code.
  */
@@ -1426,6 +1418,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	struct page *page2;
 	int swapwrite = current->flags & PF_SWAPWRITE;
 	int rc, nr_subpages;
+	LIST_HEAD(ret_pages);
 
 	if (!swapwrite)
 		current->flags |= PF_SWAPWRITE;
@@ -1448,12 +1441,21 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 			if (PageHuge(page))
 				rc = unmap_and_move_huge_page(get_new_page,
 						put_new_page, private, page,
-						pass > 2, mode, reason);
+						pass > 2, mode, reason,
+						&ret_pages);
 			else
 				rc = unmap_and_move(get_new_page, put_new_page,
 						private, page, pass > 2, mode,
-						reason);
-
+						reason, &ret_pages);
+			/*
+			 * The rules are:
+			 *	Success: non hugetlb page will be freed, hugetlb
+			 *		 page will be put back
+			 *	-EAGAIN: stay on the from list
+			 *	-ENOMEM: stay on the from list
+			 *	Other errno: put on ret_pages list then splice to
+			 *		     from list
+			 */
 			switch(rc) {
 			case -ENOMEM:
 				/*
@@ -1519,6 +1521,12 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	nr_thp_failed += thp_retry;
 	rc = nr_failed;
 out:
+	/*
+	 * Put the permanent failure page back to migration list, they
+	 * will be put back to the right list by the caller.
+	 */
+	list_splice(&ret_pages, from);
+
 	count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
 	count_vm_events(PGMIGRATE_FAIL, nr_failed);
 	count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
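The splice-back pattern the patch introduces (collect permanent failures on a
local ret_pages list, then list_splice() it back onto the from list before
returning) can be sketched in plain userspace C. This is a hedged, hypothetical
illustration, not kernel code: struct fake_page, fake_migrate_pages(), and the
minimal list helpers below are stand-ins I made up; only the overall flow
mirrors the patched migrate_pages().

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal userspace stand-in for the kernel's struct list_head. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *head)
{
	head->prev = head->next = head;
}

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->prev = entry->next = entry;
}

/* Join @list onto the front of @head, like the kernel's list_splice(). */
static void list_splice(struct list_head *list, struct list_head *head)
{
	if (list->next == list)	/* empty source list: nothing to do */
		return;
	list->next->prev = head;
	list->prev->next = head->next;
	head->next->prev = list->prev;
	head->next = list->next;
}

/* Hypothetical page: @rc simulates the unmap_and_move() result. */
struct fake_page {
	struct list_head lru;	/* must stay the first member (see cast) */
	int rc;			/* 0 = success, -EAGAIN = retry, else fail */
};

/*
 * One pass of the migrate_pages() flow after the patch: successful pages
 * leave the list, -EAGAIN pages stay on @from for another pass, and
 * permanent failures are collected on a local ret_pages list that is
 * spliced back onto @from at the end. Returns the failure count.
 */
static int fake_migrate_pages(struct list_head *from)
{
	struct list_head ret_pages;
	struct list_head *pos, *next;
	int nr_failed = 0;

	list_init(&ret_pages);
	for (pos = from->next; pos != from; pos = next) {
		struct fake_page *p = (struct fake_page *)pos;

		next = pos->next;
		if (p->rc == 0) {
			list_del(pos);		/* "freed" on success */
		} else if (p->rc != -EAGAIN) {
			list_del(pos);		/* permanent failure */
			list_add_tail(pos, &ret_pages);
			nr_failed++;
		}
		/* -EAGAIN: leave the page on @from */
	}
	/* Hand the failures back so the caller can put them back properly. */
	list_splice(&ret_pages, from);
	return nr_failed;
}
```

The cast from `struct list_head *` to `struct fake_page *` only works because
`lru` is the first member; the real kernel code instead uses container_of()
via list_for_each_entry_safe(), and retries -EAGAIN pages across up to 10
passes rather than one.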